00:00:00.001 Started by upstream project "autotest-per-patch" build number 132188 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.162 Using shallow fetch with depth 1 00:00:00.162 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.162 > git --version # timeout=10 00:00:00.198 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.826 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.837 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.848 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.849 > git config core.sparsecheckout # timeout=10 00:00:05.859 > git read-tree -mu HEAD # timeout=10 00:00:05.894 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.916 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.916 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.023 [Pipeline] Start of Pipeline 00:00:06.037 [Pipeline] library 00:00:06.038 Loading library shm_lib@master 00:00:06.039 Library shm_lib@master is cached. Copying from home. 00:00:06.052 [Pipeline] node 00:00:06.063 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.064 [Pipeline] { 00:00:06.073 [Pipeline] catchError 00:00:06.074 [Pipeline] { 00:00:06.086 [Pipeline] wrap 00:00:06.094 [Pipeline] { 00:00:06.103 [Pipeline] stage 00:00:06.105 [Pipeline] { (Prologue) 00:00:06.124 [Pipeline] echo 00:00:06.126 Node: VM-host-SM9 00:00:06.133 [Pipeline] cleanWs 00:00:06.144 [WS-CLEANUP] Deleting project workspace... 00:00:06.144 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.150 [WS-CLEANUP] done 00:00:06.338 [Pipeline] setCustomBuildProperty 00:00:06.405 [Pipeline] httpRequest 00:00:06.780 [Pipeline] echo 00:00:06.781 Sorcerer 10.211.164.20 is alive 00:00:06.790 [Pipeline] retry 00:00:06.791 [Pipeline] { 00:00:06.805 [Pipeline] httpRequest 00:00:06.809 HttpMethod: GET 00:00:06.810 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.810 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.820 Response Code: HTTP/1.1 200 OK 00:00:06.820 Success: Status code 200 is in the accepted range: 200,404 00:00:06.821 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.026 [Pipeline] } 00:00:10.044 [Pipeline] // retry 00:00:10.053 [Pipeline] sh 00:00:10.336 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.353 [Pipeline] httpRequest 00:00:10.896 [Pipeline] echo 00:00:10.899 Sorcerer 10.211.164.20 is alive 00:00:10.909 [Pipeline] retry 00:00:10.911 [Pipeline] { 00:00:10.927 [Pipeline] httpRequest 00:00:10.933 HttpMethod: GET 00:00:10.933 URL: http://10.211.164.20/packages/spdk_eba7e4aea7b16751ff079dfd6d8954df3228fff4.tar.gz 00:00:10.934 Sending request to url: http://10.211.164.20/packages/spdk_eba7e4aea7b16751ff079dfd6d8954df3228fff4.tar.gz 00:00:10.955 Response Code: HTTP/1.1 200 OK 00:00:10.956 Success: Status code 200 is in the accepted range: 200,404 00:00:10.957 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_eba7e4aea7b16751ff079dfd6d8954df3228fff4.tar.gz 00:01:16.483 [Pipeline] } 00:01:16.501 [Pipeline] // retry 00:01:16.508 [Pipeline] sh 00:01:16.846 + tar --no-same-owner -xf spdk_eba7e4aea7b16751ff079dfd6d8954df3228fff4.tar.gz 00:01:19.400 [Pipeline] sh 00:01:19.680 + git -C spdk log --oneline -n5 00:01:19.680 eba7e4aea nvmf: added support for add/delete host wrt referral 00:01:19.680 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:01:19.680 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:01:19.680 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:01:19.680 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:01:19.698 [Pipeline] writeFile 00:01:19.714 [Pipeline] sh 00:01:19.997 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.005 [Pipeline] sh 00:01:20.278 + cat autorun-spdk.conf 00:01:20.278 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.278 SPDK_TEST_NVMF=1 00:01:20.278 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.278 SPDK_TEST_URING=1 00:01:20.278 SPDK_TEST_USDT=1 00:01:20.278 SPDK_RUN_UBSAN=1 00:01:20.278 NET_TYPE=virt 00:01:20.278 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.286 RUN_NIGHTLY=0 00:01:20.288 [Pipeline] } 00:01:20.304 [Pipeline] // stage 00:01:20.321 [Pipeline] stage 00:01:20.323 [Pipeline] { (Run VM) 00:01:20.338 [Pipeline] sh 00:01:20.621 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.621 + echo 'Start stage prepare_nvme.sh' 00:01:20.621 Start stage prepare_nvme.sh 00:01:20.621 + [[ -n 3 ]] 00:01:20.621 + disk_prefix=ex3 00:01:20.621 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:20.621 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:20.621 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:20.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.621 ++ SPDK_TEST_NVMF=1 00:01:20.621 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.621 ++ SPDK_TEST_URING=1 
00:01:20.621 ++ SPDK_TEST_USDT=1 00:01:20.621 ++ SPDK_RUN_UBSAN=1 00:01:20.621 ++ NET_TYPE=virt 00:01:20.621 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.621 ++ RUN_NIGHTLY=0 00:01:20.621 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.621 + nvme_files=() 00:01:20.621 + declare -A nvme_files 00:01:20.621 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.621 + nvme_files['nvme.img']=5G 00:01:20.621 + nvme_files['nvme-cmb.img']=5G 00:01:20.621 + nvme_files['nvme-multi0.img']=4G 00:01:20.621 + nvme_files['nvme-multi1.img']=4G 00:01:20.621 + nvme_files['nvme-multi2.img']=4G 00:01:20.621 + nvme_files['nvme-openstack.img']=8G 00:01:20.621 + nvme_files['nvme-zns.img']=5G 00:01:20.621 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.621 + (( SPDK_TEST_FTL == 1 )) 00:01:20.621 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.621 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:20.621 + for nvme in "${!nvme_files[@]}" 00:01:20.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:20.621 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.621 + for nvme in "${!nvme_files[@]}" 00:01:20.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:20.621 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.621 + for nvme in "${!nvme_files[@]}" 00:01:20.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:20.880 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.880 + for nvme in "${!nvme_files[@]}" 00:01:20.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:20.880 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.880 + for nvme in "${!nvme_files[@]}" 00:01:20.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:20.880 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.880 + for nvme in "${!nvme_files[@]}" 00:01:20.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:21.139 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.139 + for nvme in "${!nvme_files[@]}" 00:01:21.139 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:21.139 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.139 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:21.399 + echo 'End stage prepare_nvme.sh' 00:01:21.399 End stage prepare_nvme.sh 00:01:21.410 [Pipeline] sh 00:01:21.691 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.691 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:21.691 00:01:21.691 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:21.691 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:21.691 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:21.691 HELP=0 00:01:21.691 DRY_RUN=0 00:01:21.691 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:21.691 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.691 NVME_AUTO_CREATE=0 00:01:21.691 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:21.691 NVME_CMB=,, 00:01:21.691 NVME_PMR=,, 00:01:21.691 NVME_ZNS=,, 00:01:21.691 NVME_MS=,, 00:01:21.691 NVME_FDP=,, 00:01:21.691 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.691 SPDK_VAGRANT_VMCPU=10 00:01:21.691 SPDK_VAGRANT_VMRAM=12288 00:01:21.691 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.691 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.691 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.691 SPDK_OPENSTACK_NETWORK=0 00:01:21.691 VAGRANT_PACKAGE_BOX=0 00:01:21.691 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.691 FORCE_DISTRO=true 00:01:21.691 VAGRANT_BOX_VERSION= 00:01:21.691 EXTRA_VAGRANTFILES= 00:01:21.691 NIC_MODEL=e1000 00:01:21.691 00:01:21.691 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:21.691 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:24.979 Bringing machine 'default' up with 'libvirt' provider... 00:01:25.237 ==> default: Creating image (snapshot of base box volume). 00:01:25.497 ==> default: Creating domain with the following settings... 
00:01:25.497 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731406993_b163ee0fad5ffe17a03a 00:01:25.497 ==> default: -- Domain type: kvm 00:01:25.497 ==> default: -- Cpus: 10 00:01:25.497 ==> default: -- Feature: acpi 00:01:25.497 ==> default: -- Feature: apic 00:01:25.497 ==> default: -- Feature: pae 00:01:25.497 ==> default: -- Memory: 12288M 00:01:25.497 ==> default: -- Memory Backing: hugepages: 00:01:25.497 ==> default: -- Management MAC: 00:01:25.497 ==> default: -- Loader: 00:01:25.497 ==> default: -- Nvram: 00:01:25.497 ==> default: -- Base box: spdk/fedora39 00:01:25.497 ==> default: -- Storage pool: default 00:01:25.497 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731406993_b163ee0fad5ffe17a03a.img (20G) 00:01:25.497 ==> default: -- Volume Cache: default 00:01:25.497 ==> default: -- Kernel: 00:01:25.497 ==> default: -- Initrd: 00:01:25.497 ==> default: -- Graphics Type: vnc 00:01:25.497 ==> default: -- Graphics Port: -1 00:01:25.497 ==> default: -- Graphics IP: 127.0.0.1 00:01:25.497 ==> default: -- Graphics Password: Not defined 00:01:25.497 ==> default: -- Video Type: cirrus 00:01:25.497 ==> default: -- Video VRAM: 9216 00:01:25.497 ==> default: -- Sound Type: 00:01:25.497 ==> default: -- Keymap: en-us 00:01:25.497 ==> default: -- TPM Path: 00:01:25.497 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:25.497 ==> default: -- Command line args: 00:01:25.497 ==> default: -> value=-device, 00:01:25.497 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:25.497 ==> default: -> value=-drive, 00:01:25.497 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:25.497 ==> default: -> value=-device, 00:01:25.497 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.497 ==> default: -> value=-device, 00:01:25.497 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:25.497 ==> default: -> value=-drive, 00:01:25.497 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:25.497 ==> default: -> value=-device, 00:01:25.497 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.497 ==> default: -> value=-drive, 00:01:25.497 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:25.497 ==> default: -> value=-device, 00:01:25.498 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.498 ==> default: -> value=-drive, 00:01:25.498 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:25.498 ==> default: -> value=-device, 00:01:25.498 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.498 ==> default: Creating shared folders metadata... 00:01:25.498 ==> default: Starting domain. 00:01:26.877 ==> default: Waiting for domain to get an IP address... 00:01:44.963 ==> default: Waiting for SSH to become available... 00:01:44.963 ==> default: Configuring and enabling network interfaces... 
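The -drive/-device pairs above define two emulated NVMe controllers: nvme-0 (serial 12340) with a single namespace backed by ex3-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the ex3-nvme-multi0/1/2 images. As a rough sketch, the same topology could be brought up with a standalone QEMU invocation along the following lines; the guest disk path and the machine/-smp/-m options are placeholders, and only the NVMe arguments are taken from the domain definition above:

  # Sketch only: guest image path and machine options are assumptions;
  # the NVMe -drive/-device arguments mirror the libvirt domain above.
  qemu-system-x86_64 \
    -machine q35,accel=kvm -smp 10 -m 12288 \
    -drive file=/path/to/fedora39-guest.img,if=virtio \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest this appears as one single-namespace controller and one three-namespace controller, which matches the nvme0/nvme1 block devices reported later by setup.sh status.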
00:01:47.542 default: SSH address: 192.168.121.58:22 00:01:47.542 default: SSH username: vagrant 00:01:47.542 default: SSH auth method: private key 00:01:49.444 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.560 ==> default: Mounting SSHFS shared folder... 00:01:58.126 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.126 ==> default: Checking Mount.. 00:01:59.503 ==> default: Folder Successfully Mounted! 00:01:59.503 ==> default: Running provisioner: file... 00:02:00.070 default: ~/.gitconfig => .gitconfig 00:02:00.637 00:02:00.637 SUCCESS! 00:02:00.637 00:02:00.637 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.637 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.637 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.637 00:02:00.645 [Pipeline] } 00:02:00.661 [Pipeline] // stage 00:02:00.671 [Pipeline] dir 00:02:00.671 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:00.673 [Pipeline] { 00:02:00.687 [Pipeline] catchError 00:02:00.689 [Pipeline] { 00:02:00.702 [Pipeline] sh 00:02:00.980 + vagrant ssh-config --host vagrant 00:02:00.980 + sed -ne /^Host/,$p 00:02:00.980 + tee ssh_conf 00:02:05.168 Host vagrant 00:02:05.168 HostName 192.168.121.58 00:02:05.168 User vagrant 00:02:05.168 Port 22 00:02:05.168 UserKnownHostsFile /dev/null 00:02:05.168 StrictHostKeyChecking no 00:02:05.168 PasswordAuthentication no 00:02:05.168 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:05.168 IdentitiesOnly yes 00:02:05.168 LogLevel FATAL 00:02:05.168 ForwardAgent yes 00:02:05.168 ForwardX11 yes 00:02:05.168 00:02:05.181 [Pipeline] withEnv 00:02:05.184 [Pipeline] { 00:02:05.198 [Pipeline] sh 00:02:05.476 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:05.476 source /etc/os-release 00:02:05.476 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.476 # Minimal, systemd-like check. 00:02:05.477 if [[ -e /.dockerenv ]]; then 00:02:05.477 # Clear garbage from the node's name: 00:02:05.477 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.477 # $HOSTNAME is the actual container id 00:02:05.477 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.477 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.477 # We can assume this is a mount from a host where container is running, 00:02:05.477 # so fetch its hostname to easily identify the target swarm worker. 
00:02:05.477 container="$(< /etc/hostname) ($agent)" 00:02:05.477 else 00:02:05.477 # Fallback 00:02:05.477 container=$agent 00:02:05.477 fi 00:02:05.477 fi 00:02:05.477 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.477 00:02:05.745 [Pipeline] } 00:02:05.761 [Pipeline] // withEnv 00:02:05.769 [Pipeline] setCustomBuildProperty 00:02:05.785 [Pipeline] stage 00:02:05.787 [Pipeline] { (Tests) 00:02:05.804 [Pipeline] sh 00:02:06.082 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.352 [Pipeline] sh 00:02:06.629 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.902 [Pipeline] timeout 00:02:06.903 Timeout set to expire in 1 hr 0 min 00:02:06.905 [Pipeline] { 00:02:06.920 [Pipeline] sh 00:02:07.200 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:07.767 HEAD is now at eba7e4aea nvmf: added support for add/delete host wrt referral 00:02:07.779 [Pipeline] sh 00:02:08.058 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.330 [Pipeline] sh 00:02:08.609 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.884 [Pipeline] sh 00:02:09.211 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:09.211 ++ readlink -f spdk_repo 00:02:09.211 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.211 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.211 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.211 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.211 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.211 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.211 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.211 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:09.211 + cd /home/vagrant/spdk_repo 00:02:09.211 + source /etc/os-release 00:02:09.211 ++ NAME='Fedora Linux' 00:02:09.211 ++ VERSION='39 (Cloud Edition)' 00:02:09.211 ++ ID=fedora 00:02:09.211 ++ VERSION_ID=39 00:02:09.211 ++ VERSION_CODENAME= 00:02:09.211 ++ PLATFORM_ID=platform:f39 00:02:09.211 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:09.211 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.211 ++ LOGO=fedora-logo-icon 00:02:09.211 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:09.211 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.211 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:09.211 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.211 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.211 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.211 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:09.211 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.211 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:09.211 ++ SUPPORT_END=2024-11-12 00:02:09.211 ++ VARIANT='Cloud Edition' 00:02:09.211 ++ VARIANT_ID=cloud 00:02:09.211 + uname -a 00:02:09.211 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:09.211 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.789 Hugepages 00:02:09.789 node hugesize free / total 00:02:09.789 node0 1048576kB 0 / 0 00:02:09.789 node0 2048kB 0 / 0 00:02:09.789 00:02:09.789 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.789 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.789 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:09.789 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:09.789 + rm -f /tmp/spdk-ld-path 00:02:09.789 + source autorun-spdk.conf 00:02:09.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.789 ++ SPDK_TEST_NVMF=1 00:02:09.789 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.789 ++ SPDK_TEST_URING=1 00:02:09.789 ++ SPDK_TEST_USDT=1 00:02:09.789 ++ SPDK_RUN_UBSAN=1 00:02:09.789 ++ NET_TYPE=virt 00:02:09.789 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.789 ++ RUN_NIGHTLY=0 00:02:09.789 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.789 + [[ -n '' ]] 00:02:09.789 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.789 + for M in /var/spdk/build-*-manifest.txt 00:02:09.789 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.789 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.789 + for M in /var/spdk/build-*-manifest.txt 00:02:09.789 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.789 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.789 + for M in /var/spdk/build-*-manifest.txt 00:02:09.789 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.789 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.048 ++ uname 00:02:10.048 + [[ Linux == \L\i\n\u\x ]] 00:02:10.048 + sudo dmesg -T 00:02:10.048 + sudo dmesg --clear 00:02:10.048 + dmesg_pid=5263 00:02:10.048 + sudo dmesg -Tw 00:02:10.048 + [[ Fedora Linux == FreeBSD ]] 00:02:10.048 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.048 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.048 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.048 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.048 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.048 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.048 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.048 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.048 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.048 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.048 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.048 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.048 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.048 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.048 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.048 10:23:58 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:10.048 10:23:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.048 10:23:58 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:10.049 10:23:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:10.049 10:23:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.049 10:23:58 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:10.049 10:23:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.049 10:23:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.049 10:23:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.049 10:23:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.049 10:23:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.049 10:23:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.049 10:23:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.049 10:23:58 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.049 10:23:58 -- paths/export.sh@5 -- $ export PATH 00:02:10.049 10:23:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.049 10:23:58 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.049 10:23:58 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:10.049 10:23:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731407038.XXXXXX 00:02:10.049 10:23:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731407038.QRAkYu 00:02:10.049 10:23:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:10.049 10:23:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:10.049 10:23:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:10.049 10:23:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.049 10:23:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.049 10:23:58 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:10.049 10:23:58 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:10.049 10:23:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 10:23:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:10.049 10:23:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:10.049 10:23:58 -- pm/common@17 -- $ local monitor 00:02:10.049 10:23:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.049 10:23:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.049 10:23:58 -- pm/common@25 -- $ sleep 1 00:02:10.049 10:23:58 -- pm/common@21 -- $ date +%s 00:02:10.049 10:23:58 -- pm/common@21 -- $ date +%s 00:02:10.049 10:23:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731407038 00:02:10.049 10:23:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731407038 00:02:10.307 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731407038_collect-vmstat.pm.log 00:02:10.307 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731407038_collect-cpu-load.pm.log 00:02:11.243 10:23:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:11.243 10:23:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.243 10:23:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.243 10:23:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.243 10:23:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.243 Tue Nov 12 10:23:59 AM UTC 2024 00:02:11.243 10:23:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.243 v25.01-pre-159-geba7e4aea 00:02:11.243 10:23:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.243 10:23:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.243 10:23:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.243 10:23:59 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:11.243 10:23:59 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:11.243 10:23:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.243 ************************************ 00:02:11.243 START TEST ubsan 00:02:11.243 ************************************ 00:02:11.243 using ubsan 00:02:11.243 10:23:59 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:11.243 00:02:11.243 real 0m0.000s 00:02:11.243 user 0m0.000s 00:02:11.243 sys 0m0.000s 00:02:11.243 10:23:59 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:11.243 10:23:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.243 ************************************ 00:02:11.243 END TEST ubsan 00:02:11.243 ************************************ 00:02:11.243 10:23:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.243 10:23:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.243 10:23:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.243 10:23:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:11.243 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.243 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.810 Using 'verbs' RDMA provider 00:02:27.631 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:39.855 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:39.855 Creating mk/config.mk...done. 00:02:39.855 Creating mk/cc.flags.mk...done. 00:02:39.855 Type 'make' to build. 
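The configure invocation above (with --enable-ubsan and --with-uring among the options) plus the make step that follows are the core of the build. Replayed by hand outside the CI harness it would look roughly like this, assuming an SPDK checkout with submodules initialized and a fio source tree at /usr/src/fio for the --with-fio plugin:

  # Sketch of the equivalent manual build; paths and configure flags
  # are copied from the log above.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j"$(nproc)"    # the CI run below uses make -j10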
00:02:39.855 10:24:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:39.855 10:24:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:39.855 10:24:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:39.855 10:24:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.855 ************************************ 00:02:39.855 START TEST make 00:02:39.855 ************************************ 00:02:39.855 10:24:27 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:39.855 make[1]: Nothing to be done for 'all'. 00:02:52.050 The Meson build system 00:02:52.050 Version: 1.5.0 00:02:52.050 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:52.050 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.050 Build type: native build 00:02:52.050 Program cat found: YES (/usr/bin/cat) 00:02:52.050 Project name: DPDK 00:02:52.050 Project version: 24.03.0 00:02:52.050 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.050 C linker for the host machine: cc ld.bfd 2.40-14 00:02:52.050 Host machine cpu family: x86_64 00:02:52.050 Host machine cpu: x86_64 00:02:52.050 Message: ## Building in Developer Mode ## 00:02:52.050 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.050 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:52.050 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.050 Program python3 found: YES (/usr/bin/python3) 00:02:52.050 Program cat found: YES (/usr/bin/cat) 00:02:52.050 Compiler for C supports arguments -march=native: YES 00:02:52.050 Checking for size of "void *" : 8 00:02:52.050 Checking for size of "void *" : 8 (cached) 00:02:52.050 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:52.050 Library m found: YES 00:02:52.050 Library numa found: YES 00:02:52.050 Has header "numaif.h" : YES 00:02:52.050 Library fdt found: NO 00:02:52.050 Library execinfo found: NO 00:02:52.050 Has header "execinfo.h" : YES 00:02:52.050 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.050 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.050 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.050 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.050 Run-time dependency openssl found: YES 3.1.1 00:02:52.050 Run-time dependency libpcap found: YES 1.10.4 00:02:52.050 Has header "pcap.h" with dependency libpcap: YES 00:02:52.050 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.050 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.050 Compiler for C supports arguments -Wformat: YES 00:02:52.050 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.050 Compiler for C supports arguments -Wformat-security: NO 00:02:52.050 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.050 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.050 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.050 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.050 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.050 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.050 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.050 Compiler for C supports arguments -Wundef: YES 00:02:52.050 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.050 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:52.050 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.050 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.050 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.050 Program objdump found: YES (/usr/bin/objdump) 00:02:52.050 Compiler for C supports arguments -mavx512f: YES 00:02:52.050 Checking if "AVX512 checking" compiles: YES 00:02:52.050 Fetching value of define "__SSE4_2__" : 1 00:02:52.050 Fetching value of define "__AES__" : 1 00:02:52.050 Fetching value of define "__AVX__" : 1 00:02:52.050 Fetching value of define "__AVX2__" : 1 00:02:52.050 Fetching value of define "__AVX512BW__" : (undefined) 00:02:52.050 Fetching value of define "__AVX512CD__" : (undefined) 00:02:52.050 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:52.050 Fetching value of define "__AVX512F__" : (undefined) 00:02:52.050 Fetching value of define "__AVX512VL__" : (undefined) 00:02:52.050 Fetching value of define "__PCLMUL__" : 1 00:02:52.050 Fetching value of define "__RDRND__" : 1 00:02:52.050 Fetching value of define "__RDSEED__" : 1 00:02:52.050 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.050 Fetching value of define "__znver1__" : (undefined) 00:02:52.050 Fetching value of define "__znver2__" : (undefined) 00:02:52.050 Fetching value of define "__znver3__" : (undefined) 00:02:52.050 Fetching value of define "__znver4__" : (undefined) 00:02:52.050 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.050 Message: lib/log: Defining dependency "log" 00:02:52.050 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.050 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.051 Checking for function "getentropy" : NO 00:02:52.051 Message: lib/eal: Defining dependency "eal" 00:02:52.051 Message: lib/ring: Defining dependency "ring" 00:02:52.051 Message: lib/rcu: Defining dependency "rcu" 00:02:52.051 Message: lib/mempool: Defining dependency "mempool" 00:02:52.051 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.051 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.051 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.051 Compiler for C supports arguments -mpclmul: YES 00:02:52.051 Compiler for C supports arguments -maes: YES 00:02:52.051 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.051 Compiler for C supports arguments -mavx512bw: YES 00:02:52.051 Compiler for C supports arguments -mavx512dq: YES 00:02:52.051 Compiler for C supports arguments -mavx512vl: YES 00:02:52.051 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.051 Compiler for C supports arguments -mavx2: YES 00:02:52.051 Compiler for C supports arguments -mavx: YES 00:02:52.051 Message: lib/net: Defining dependency "net" 00:02:52.051 Message: lib/meter: Defining dependency "meter" 00:02:52.051 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.051 Message: lib/pci: Defining dependency "pci" 00:02:52.051 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.051 Message: lib/hash: Defining dependency "hash" 00:02:52.051 Message: lib/timer: Defining dependency "timer" 00:02:52.051 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.051 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.051 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.051 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.051 Message: lib/power: Defining 
dependency "power" 00:02:52.051 Message: lib/reorder: Defining dependency "reorder" 00:02:52.051 Message: lib/security: Defining dependency "security" 00:02:52.051 Has header "linux/userfaultfd.h" : YES 00:02:52.051 Has header "linux/vduse.h" : YES 00:02:52.051 Message: lib/vhost: Defining dependency "vhost" 00:02:52.051 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:52.051 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.051 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.051 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.051 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:52.051 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:52.051 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:52.051 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:52.051 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:52.051 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:52.051 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:52.051 Configuring doxy-api-html.conf using configuration 00:02:52.051 Configuring doxy-api-man.conf using configuration 00:02:52.051 Program mandb found: YES (/usr/bin/mandb) 00:02:52.051 Program sphinx-build found: NO 00:02:52.051 Configuring rte_build_config.h using configuration 00:02:52.051 Message: 00:02:52.051 ================= 00:02:52.051 Applications Enabled 00:02:52.051 ================= 00:02:52.051 00:02:52.051 apps: 00:02:52.051 00:02:52.051 00:02:52.051 Message: 00:02:52.051 ================= 00:02:52.051 Libraries Enabled 00:02:52.051 ================= 00:02:52.051 00:02:52.051 libs: 00:02:52.051 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:52.051 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:52.051 cryptodev, dmadev, power, reorder, security, vhost, 00:02:52.051 00:02:52.051 Message: 00:02:52.051 =============== 00:02:52.051 Drivers Enabled 00:02:52.051 =============== 00:02:52.051 00:02:52.051 common: 00:02:52.051 00:02:52.051 bus: 00:02:52.051 pci, vdev, 00:02:52.051 mempool: 00:02:52.051 ring, 00:02:52.051 dma: 00:02:52.051 00:02:52.051 net: 00:02:52.051 00:02:52.051 crypto: 00:02:52.051 00:02:52.051 compress: 00:02:52.051 00:02:52.051 vdpa: 00:02:52.051 00:02:52.051 00:02:52.051 Message: 00:02:52.051 ================= 00:02:52.051 Content Skipped 00:02:52.051 ================= 00:02:52.051 00:02:52.051 apps: 00:02:52.051 dumpcap: explicitly disabled via build config 00:02:52.051 graph: explicitly disabled via build config 00:02:52.051 pdump: explicitly disabled via build config 00:02:52.051 proc-info: explicitly disabled via build config 00:02:52.051 test-acl: explicitly disabled via build config 00:02:52.051 test-bbdev: explicitly disabled via build config 00:02:52.051 test-cmdline: explicitly disabled via build config 00:02:52.051 test-compress-perf: explicitly disabled via build config 00:02:52.051 test-crypto-perf: explicitly disabled via build config 00:02:52.051 test-dma-perf: explicitly disabled via build config 00:02:52.051 test-eventdev: explicitly disabled via build config 00:02:52.051 test-fib: explicitly disabled via build config 00:02:52.051 test-flow-perf: explicitly disabled via build config 00:02:52.051 test-gpudev: explicitly disabled via build config 00:02:52.051 test-mldev: explicitly disabled via build config 00:02:52.051 test-pipeline: 
explicitly disabled via build config 00:02:52.051 test-pmd: explicitly disabled via build config 00:02:52.051 test-regex: explicitly disabled via build config 00:02:52.051 test-sad: explicitly disabled via build config 00:02:52.051 test-security-perf: explicitly disabled via build config 00:02:52.051 00:02:52.051 libs: 00:02:52.051 argparse: explicitly disabled via build config 00:02:52.051 metrics: explicitly disabled via build config 00:02:52.051 acl: explicitly disabled via build config 00:02:52.051 bbdev: explicitly disabled via build config 00:02:52.051 bitratestats: explicitly disabled via build config 00:02:52.051 bpf: explicitly disabled via build config 00:02:52.051 cfgfile: explicitly disabled via build config 00:02:52.051 distributor: explicitly disabled via build config 00:02:52.051 efd: explicitly disabled via build config 00:02:52.051 eventdev: explicitly disabled via build config 00:02:52.051 dispatcher: explicitly disabled via build config 00:02:52.051 gpudev: explicitly disabled via build config 00:02:52.051 gro: explicitly disabled via build config 00:02:52.051 gso: explicitly disabled via build config 00:02:52.051 ip_frag: explicitly disabled via build config 00:02:52.051 jobstats: explicitly disabled via build config 00:02:52.051 latencystats: explicitly disabled via build config 00:02:52.051 lpm: explicitly disabled via build config 00:02:52.051 member: explicitly disabled via build config 00:02:52.051 pcapng: explicitly disabled via build config 00:02:52.051 rawdev: explicitly disabled via build config 00:02:52.051 regexdev: explicitly disabled via build config 00:02:52.051 mldev: explicitly disabled via build config 00:02:52.051 rib: explicitly disabled via build config 00:02:52.051 sched: explicitly disabled via build config 00:02:52.051 stack: explicitly disabled via build config 00:02:52.051 ipsec: explicitly disabled via build config 00:02:52.051 pdcp: explicitly disabled via build config 00:02:52.051 fib: explicitly disabled via build config 00:02:52.051 port: explicitly disabled via build config 00:02:52.051 pdump: explicitly disabled via build config 00:02:52.051 table: explicitly disabled via build config 00:02:52.051 pipeline: explicitly disabled via build config 00:02:52.051 graph: explicitly disabled via build config 00:02:52.051 node: explicitly disabled via build config 00:02:52.052 00:02:52.052 drivers: 00:02:52.052 common/cpt: not in enabled drivers build config 00:02:52.052 common/dpaax: not in enabled drivers build config 00:02:52.052 common/iavf: not in enabled drivers build config 00:02:52.052 common/idpf: not in enabled drivers build config 00:02:52.052 common/ionic: not in enabled drivers build config 00:02:52.052 common/mvep: not in enabled drivers build config 00:02:52.052 common/octeontx: not in enabled drivers build config 00:02:52.052 bus/auxiliary: not in enabled drivers build config 00:02:52.052 bus/cdx: not in enabled drivers build config 00:02:52.052 bus/dpaa: not in enabled drivers build config 00:02:52.052 bus/fslmc: not in enabled drivers build config 00:02:52.052 bus/ifpga: not in enabled drivers build config 00:02:52.052 bus/platform: not in enabled drivers build config 00:02:52.052 bus/uacce: not in enabled drivers build config 00:02:52.052 bus/vmbus: not in enabled drivers build config 00:02:52.052 common/cnxk: not in enabled drivers build config 00:02:52.052 common/mlx5: not in enabled drivers build config 00:02:52.052 common/nfp: not in enabled drivers build config 00:02:52.052 common/nitrox: not in enabled drivers build config 
00:02:52.052 common/qat: not in enabled drivers build config 00:02:52.052 common/sfc_efx: not in enabled drivers build config 00:02:52.052 mempool/bucket: not in enabled drivers build config 00:02:52.052 mempool/cnxk: not in enabled drivers build config 00:02:52.052 mempool/dpaa: not in enabled drivers build config 00:02:52.052 mempool/dpaa2: not in enabled drivers build config 00:02:52.052 mempool/octeontx: not in enabled drivers build config 00:02:52.052 mempool/stack: not in enabled drivers build config 00:02:52.052 dma/cnxk: not in enabled drivers build config 00:02:52.052 dma/dpaa: not in enabled drivers build config 00:02:52.052 dma/dpaa2: not in enabled drivers build config 00:02:52.052 dma/hisilicon: not in enabled drivers build config 00:02:52.052 dma/idxd: not in enabled drivers build config 00:02:52.052 dma/ioat: not in enabled drivers build config 00:02:52.052 dma/skeleton: not in enabled drivers build config 00:02:52.052 net/af_packet: not in enabled drivers build config 00:02:52.052 net/af_xdp: not in enabled drivers build config 00:02:52.052 net/ark: not in enabled drivers build config 00:02:52.052 net/atlantic: not in enabled drivers build config 00:02:52.052 net/avp: not in enabled drivers build config 00:02:52.052 net/axgbe: not in enabled drivers build config 00:02:52.052 net/bnx2x: not in enabled drivers build config 00:02:52.052 net/bnxt: not in enabled drivers build config 00:02:52.052 net/bonding: not in enabled drivers build config 00:02:52.052 net/cnxk: not in enabled drivers build config 00:02:52.052 net/cpfl: not in enabled drivers build config 00:02:52.052 net/cxgbe: not in enabled drivers build config 00:02:52.052 net/dpaa: not in enabled drivers build config 00:02:52.052 net/dpaa2: not in enabled drivers build config 00:02:52.052 net/e1000: not in enabled drivers build config 00:02:52.052 net/ena: not in enabled drivers build config 00:02:52.052 net/enetc: not in enabled drivers build config 00:02:52.052 net/enetfec: not in enabled drivers build config 00:02:52.052 net/enic: not in enabled drivers build config 00:02:52.052 net/failsafe: not in enabled drivers build config 00:02:52.052 net/fm10k: not in enabled drivers build config 00:02:52.052 net/gve: not in enabled drivers build config 00:02:52.052 net/hinic: not in enabled drivers build config 00:02:52.052 net/hns3: not in enabled drivers build config 00:02:52.052 net/i40e: not in enabled drivers build config 00:02:52.052 net/iavf: not in enabled drivers build config 00:02:52.052 net/ice: not in enabled drivers build config 00:02:52.052 net/idpf: not in enabled drivers build config 00:02:52.052 net/igc: not in enabled drivers build config 00:02:52.052 net/ionic: not in enabled drivers build config 00:02:52.052 net/ipn3ke: not in enabled drivers build config 00:02:52.052 net/ixgbe: not in enabled drivers build config 00:02:52.052 net/mana: not in enabled drivers build config 00:02:52.052 net/memif: not in enabled drivers build config 00:02:52.052 net/mlx4: not in enabled drivers build config 00:02:52.052 net/mlx5: not in enabled drivers build config 00:02:52.052 net/mvneta: not in enabled drivers build config 00:02:52.052 net/mvpp2: not in enabled drivers build config 00:02:52.052 net/netvsc: not in enabled drivers build config 00:02:52.052 net/nfb: not in enabled drivers build config 00:02:52.052 net/nfp: not in enabled drivers build config 00:02:52.052 net/ngbe: not in enabled drivers build config 00:02:52.052 net/null: not in enabled drivers build config 00:02:52.052 net/octeontx: not in enabled drivers 
build config 00:02:52.052 net/octeon_ep: not in enabled drivers build config 00:02:52.052 net/pcap: not in enabled drivers build config 00:02:52.052 net/pfe: not in enabled drivers build config 00:02:52.052 net/qede: not in enabled drivers build config 00:02:52.052 net/ring: not in enabled drivers build config 00:02:52.052 net/sfc: not in enabled drivers build config 00:02:52.052 net/softnic: not in enabled drivers build config 00:02:52.052 net/tap: not in enabled drivers build config 00:02:52.052 net/thunderx: not in enabled drivers build config 00:02:52.052 net/txgbe: not in enabled drivers build config 00:02:52.052 net/vdev_netvsc: not in enabled drivers build config 00:02:52.052 net/vhost: not in enabled drivers build config 00:02:52.052 net/virtio: not in enabled drivers build config 00:02:52.052 net/vmxnet3: not in enabled drivers build config 00:02:52.052 raw/*: missing internal dependency, "rawdev" 00:02:52.052 crypto/armv8: not in enabled drivers build config 00:02:52.052 crypto/bcmfs: not in enabled drivers build config 00:02:52.052 crypto/caam_jr: not in enabled drivers build config 00:02:52.052 crypto/ccp: not in enabled drivers build config 00:02:52.052 crypto/cnxk: not in enabled drivers build config 00:02:52.052 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.052 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.052 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.052 crypto/mlx5: not in enabled drivers build config 00:02:52.052 crypto/mvsam: not in enabled drivers build config 00:02:52.052 crypto/nitrox: not in enabled drivers build config 00:02:52.052 crypto/null: not in enabled drivers build config 00:02:52.052 crypto/octeontx: not in enabled drivers build config 00:02:52.052 crypto/openssl: not in enabled drivers build config 00:02:52.052 crypto/scheduler: not in enabled drivers build config 00:02:52.052 crypto/uadk: not in enabled drivers build config 00:02:52.052 crypto/virtio: not in enabled drivers build config 00:02:52.052 compress/isal: not in enabled drivers build config 00:02:52.052 compress/mlx5: not in enabled drivers build config 00:02:52.052 compress/nitrox: not in enabled drivers build config 00:02:52.052 compress/octeontx: not in enabled drivers build config 00:02:52.052 compress/zlib: not in enabled drivers build config 00:02:52.052 regex/*: missing internal dependency, "regexdev" 00:02:52.052 ml/*: missing internal dependency, "mldev" 00:02:52.052 vdpa/ifc: not in enabled drivers build config 00:02:52.052 vdpa/mlx5: not in enabled drivers build config 00:02:52.052 vdpa/nfp: not in enabled drivers build config 00:02:52.052 vdpa/sfc: not in enabled drivers build config 00:02:52.052 event/*: missing internal dependency, "eventdev" 00:02:52.052 baseband/*: missing internal dependency, "bbdev" 00:02:52.052 gpu/*: missing internal dependency, "gpudev" 00:02:52.052 00:02:52.052 00:02:52.052 Build targets in project: 85 00:02:52.052 00:02:52.052 DPDK 24.03.0 00:02:52.052 00:02:52.052 User defined options 00:02:52.052 buildtype : debug 00:02:52.052 default_library : shared 00:02:52.052 libdir : lib 00:02:52.052 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.052 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:52.052 c_link_args : 00:02:52.052 cpu_instruction_set: native 00:02:52.052 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.052 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.052 enable_docs : false 00:02:52.052 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:52.052 enable_kmods : false 00:02:52.052 max_lcores : 128 00:02:52.052 tests : false 00:02:52.052 00:02:52.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.310 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:52.310 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:52.310 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.310 [3/268] Linking static target lib/librte_kvargs.a 00:02:52.310 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.310 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:52.568 [6/268] Linking static target lib/librte_log.a 00:02:52.827 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.085 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.085 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.085 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.085 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.344 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.344 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.344 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.344 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.344 [16/268] Linking static target lib/librte_telemetry.a 00:02:53.601 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.601 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.601 [19/268] Linking target lib/librte_log.so.24.1 00:02:53.601 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.859 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.860 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:53.860 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.860 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.118 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.118 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.118 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.118 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.376 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:54.376 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.376 [31/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.376 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.376 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:54.376 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.634 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.892 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.892 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.892 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.892 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.150 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.150 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.150 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.150 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.150 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:55.150 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:55.408 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.667 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:55.667 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.667 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:55.925 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.925 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.183 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.183 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.183 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.183 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.183 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.183 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.441 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.699 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.699 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.699 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.958 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.958 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.958 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.958 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.958 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.958 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.524 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.524 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.524 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:57.783 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.783 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.783 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.783 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.783 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.783 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.783 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.048 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.048 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.048 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.306 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:58.306 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.306 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:58.565 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.565 [85/268] Linking static target lib/librte_ring.a 00:02:58.565 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.565 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.565 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.823 [89/268] Linking static target lib/librte_eal.a 00:02:58.823 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.823 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.081 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.081 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.081 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.081 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.081 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.081 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.081 [98/268] Linking static target lib/librte_mempool.a 00:02:59.081 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.081 [100/268] Linking static target lib/librte_rcu.a 00:02:59.339 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.339 [102/268] Linking static target lib/librte_mbuf.a 00:02:59.597 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.597 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.597 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.855 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.855 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.856 [108/268] Linking static target lib/librte_meter.a 00:02:59.856 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.856 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.856 [111/268] Linking static target lib/librte_net.a 00:03:00.114 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
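The "User defined options" block near the top of this configuration output records how the embedded DPDK 24.03 tree was configured: a debug build of shared libraries installed under spdk/dpdk/build, with the listed apps and libraries disabled and only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled. SPDK's dpdkbuild wrapper issues the meson/ninja commands itself and the exact invocation is not printed in this log, so the following is only a hedged, by-hand approximation assembled from that summary (the command form, and the decision to leave out the long disable_apps/disable_libs lists, are assumptions; the paths, flags and values are taken from the log).

# Hedged sketch: an approximate by-hand equivalent of the DPDK configuration
# summarized above. SPDK's build scripts normally drive this step; only the
# paths and option values come from the log, the invocation itself does not.
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    # plus -Ddisable_apps=... and -Ddisable_libs=... with the comma-separated
    # lists printed in the options summary above
ninja -C build-tmp -j 10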
00:03:00.372 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.372 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.372 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.372 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.372 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.630 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.630 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.196 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.196 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:01.455 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:01.455 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:01.455 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.455 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.455 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:01.455 [127/268] Linking static target lib/librte_pci.a 00:03:01.713 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:01.713 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:01.713 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:01.713 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.713 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:01.972 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:01.972 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.972 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.972 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.972 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:01.972 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.972 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.972 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.230 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.230 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.230 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.230 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.230 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.488 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.488 [147/268] Linking static target lib/librte_ethdev.a 00:03:02.488 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.746 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:02.746 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.746 [151/268] Linking static target lib/librte_cmdline.a 00:03:02.746 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:02.746 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.004 [154/268] Linking static target lib/librte_timer.a 00:03:03.004 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.262 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.262 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.262 [158/268] Linking static target lib/librte_hash.a 00:03:03.262 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.521 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.521 [161/268] Linking static target lib/librte_compressdev.a 00:03:03.521 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.521 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.779 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.780 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.038 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.038 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.038 [168/268] Linking static target lib/librte_dmadev.a 00:03:04.297 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:04.297 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:04.297 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.297 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.556 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:04.556 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.556 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.556 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.556 [177/268] Linking static target lib/librte_cryptodev.a 00:03:04.814 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.072 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:05.072 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.072 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:05.072 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:05.072 [183/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.072 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:05.331 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:05.331 [186/268] Linking static target lib/librte_power.a 00:03:05.589 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.589 [188/268] Linking static target lib/librte_reorder.a 00:03:05.848 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.848 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:05.848 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.848 [192/268] Linking static target 
lib/librte_security.a 00:03:06.105 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.364 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.364 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:06.622 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.622 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.880 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.880 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:06.880 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.139 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:07.139 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.139 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:07.397 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:07.397 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:07.655 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:07.655 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:07.655 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.655 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:07.655 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:07.913 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:07.913 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.913 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:08.172 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:08.172 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:08.172 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:08.172 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.172 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:08.172 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:08.172 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:08.172 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:08.172 [222/268] Linking static target drivers/librte_bus_vdev.a 00:03:08.430 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.430 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.430 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.430 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:08.430 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.688 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.622 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.622 [230/268] Linking static target lib/librte_vhost.a 00:03:09.880 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.880 [232/268] Linking target lib/librte_eal.so.24.1 00:03:10.138 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:10.138 [234/268] Linking target lib/librte_pci.so.24.1 00:03:10.138 [235/268] Linking target lib/librte_meter.so.24.1 00:03:10.138 [236/268] Linking target lib/librte_ring.so.24.1 00:03:10.138 [237/268] Linking target lib/librte_timer.so.24.1 00:03:10.138 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:10.138 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:10.396 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:10.396 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:10.396 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:10.396 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:10.396 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:10.396 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:10.396 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:10.396 [247/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:10.396 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.396 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:10.396 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:10.396 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:10.396 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:10.655 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.655 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:10.655 [255/268] Linking target lib/librte_net.so.24.1 00:03:10.655 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:10.655 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:10.913 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.913 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.913 [260/268] Linking target lib/librte_hash.so.24.1 00:03:10.913 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:10.913 [262/268] Linking target lib/librte_security.so.24.1 00:03:10.913 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:10.913 [264/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.913 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:11.171 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:11.171 [267/268] Linking target lib/librte_power.so.24.1 00:03:11.171 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:11.171 INFO: autodetecting backend as ninja 00:03:11.171 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:37.712 CC lib/ut_mock/mock.o 00:03:37.712 CC lib/log/log.o 00:03:37.712 CC lib/log/log_flags.o 00:03:37.712 CC lib/log/log_deprecated.o 00:03:37.712 CC lib/ut/ut.o 00:03:37.712 LIB 
libspdk_ut.a 00:03:37.712 LIB libspdk_log.a 00:03:37.712 LIB libspdk_ut_mock.a 00:03:37.712 SO libspdk_ut.so.2.0 00:03:37.712 SO libspdk_ut_mock.so.6.0 00:03:37.712 SO libspdk_log.so.7.1 00:03:37.712 SYMLINK libspdk_ut_mock.so 00:03:37.712 SYMLINK libspdk_ut.so 00:03:37.712 SYMLINK libspdk_log.so 00:03:37.712 CC lib/dma/dma.o 00:03:37.712 CC lib/ioat/ioat.o 00:03:37.712 CC lib/util/base64.o 00:03:37.712 CC lib/util/bit_array.o 00:03:37.712 CC lib/util/cpuset.o 00:03:37.712 CC lib/util/crc16.o 00:03:37.712 CC lib/util/crc32.o 00:03:37.712 CC lib/util/crc32c.o 00:03:37.712 CXX lib/trace_parser/trace.o 00:03:37.712 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.712 CC lib/util/crc32_ieee.o 00:03:37.712 CC lib/vfio_user/host/vfio_user.o 00:03:37.712 CC lib/util/crc64.o 00:03:37.712 CC lib/util/dif.o 00:03:37.712 CC lib/util/fd.o 00:03:37.712 LIB libspdk_dma.a 00:03:37.712 CC lib/util/fd_group.o 00:03:37.712 LIB libspdk_ioat.a 00:03:37.712 SO libspdk_dma.so.5.0 00:03:37.712 SO libspdk_ioat.so.7.0 00:03:37.712 CC lib/util/file.o 00:03:37.712 CC lib/util/hexlify.o 00:03:37.712 SYMLINK libspdk_ioat.so 00:03:37.712 SYMLINK libspdk_dma.so 00:03:37.712 CC lib/util/iov.o 00:03:37.712 CC lib/util/math.o 00:03:37.712 CC lib/util/net.o 00:03:37.713 CC lib/util/pipe.o 00:03:37.713 LIB libspdk_vfio_user.a 00:03:37.713 SO libspdk_vfio_user.so.5.0 00:03:37.713 CC lib/util/strerror_tls.o 00:03:37.713 CC lib/util/string.o 00:03:37.713 SYMLINK libspdk_vfio_user.so 00:03:37.713 CC lib/util/uuid.o 00:03:37.713 CC lib/util/xor.o 00:03:37.713 CC lib/util/zipf.o 00:03:37.972 CC lib/util/md5.o 00:03:37.972 LIB libspdk_util.a 00:03:38.231 SO libspdk_util.so.10.1 00:03:38.231 SYMLINK libspdk_util.so 00:03:38.490 LIB libspdk_trace_parser.a 00:03:38.490 SO libspdk_trace_parser.so.6.0 00:03:38.490 CC lib/json/json_parse.o 00:03:38.490 CC lib/json/json_util.o 00:03:38.490 CC lib/json/json_write.o 00:03:38.490 CC lib/rdma_provider/common.o 00:03:38.490 CC lib/env_dpdk/env.o 00:03:38.490 CC lib/rdma_utils/rdma_utils.o 00:03:38.490 CC lib/conf/conf.o 00:03:38.490 CC lib/idxd/idxd.o 00:03:38.490 CC lib/vmd/vmd.o 00:03:38.490 SYMLINK libspdk_trace_parser.so 00:03:38.490 CC lib/vmd/led.o 00:03:38.749 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.749 CC lib/idxd/idxd_user.o 00:03:38.749 CC lib/env_dpdk/memory.o 00:03:38.749 LIB libspdk_conf.a 00:03:38.749 CC lib/env_dpdk/pci.o 00:03:38.749 SO libspdk_conf.so.6.0 00:03:38.749 LIB libspdk_rdma_utils.a 00:03:38.749 LIB libspdk_json.a 00:03:38.749 SO libspdk_rdma_utils.so.1.0 00:03:38.749 SO libspdk_json.so.6.0 00:03:38.749 SYMLINK libspdk_conf.so 00:03:38.749 CC lib/env_dpdk/init.o 00:03:38.749 SYMLINK libspdk_rdma_utils.so 00:03:38.749 CC lib/env_dpdk/threads.o 00:03:39.008 LIB libspdk_rdma_provider.a 00:03:39.008 SYMLINK libspdk_json.so 00:03:39.008 SO libspdk_rdma_provider.so.6.0 00:03:39.008 CC lib/idxd/idxd_kernel.o 00:03:39.008 SYMLINK libspdk_rdma_provider.so 00:03:39.008 CC lib/env_dpdk/pci_ioat.o 00:03:39.008 CC lib/env_dpdk/pci_virtio.o 00:03:39.008 CC lib/jsonrpc/jsonrpc_server.o 00:03:39.008 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:39.008 CC lib/env_dpdk/pci_vmd.o 00:03:39.008 LIB libspdk_idxd.a 00:03:39.008 CC lib/env_dpdk/pci_idxd.o 00:03:39.267 LIB libspdk_vmd.a 00:03:39.267 SO libspdk_idxd.so.12.1 00:03:39.267 SO libspdk_vmd.so.6.0 00:03:39.267 CC lib/jsonrpc/jsonrpc_client.o 00:03:39.267 CC lib/env_dpdk/pci_event.o 00:03:39.267 SYMLINK libspdk_idxd.so 00:03:39.267 CC lib/env_dpdk/sigbus_handler.o 00:03:39.267 SYMLINK libspdk_vmd.so 00:03:39.267 CC 
lib/env_dpdk/pci_dpdk.o 00:03:39.267 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:39.267 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.267 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.526 LIB libspdk_jsonrpc.a 00:03:39.526 SO libspdk_jsonrpc.so.6.0 00:03:39.526 SYMLINK libspdk_jsonrpc.so 00:03:39.785 CC lib/rpc/rpc.o 00:03:40.045 LIB libspdk_env_dpdk.a 00:03:40.045 SO libspdk_env_dpdk.so.15.1 00:03:40.045 LIB libspdk_rpc.a 00:03:40.045 SO libspdk_rpc.so.6.0 00:03:40.045 SYMLINK libspdk_env_dpdk.so 00:03:40.045 SYMLINK libspdk_rpc.so 00:03:40.303 CC lib/notify/notify.o 00:03:40.304 CC lib/notify/notify_rpc.o 00:03:40.304 CC lib/keyring/keyring.o 00:03:40.304 CC lib/keyring/keyring_rpc.o 00:03:40.304 CC lib/trace/trace.o 00:03:40.304 CC lib/trace/trace_flags.o 00:03:40.304 CC lib/trace/trace_rpc.o 00:03:40.563 LIB libspdk_notify.a 00:03:40.563 SO libspdk_notify.so.6.0 00:03:40.563 SYMLINK libspdk_notify.so 00:03:40.563 LIB libspdk_keyring.a 00:03:40.563 LIB libspdk_trace.a 00:03:40.822 SO libspdk_keyring.so.2.0 00:03:40.822 SO libspdk_trace.so.11.0 00:03:40.822 SYMLINK libspdk_keyring.so 00:03:40.822 SYMLINK libspdk_trace.so 00:03:41.086 CC lib/sock/sock.o 00:03:41.086 CC lib/sock/sock_rpc.o 00:03:41.086 CC lib/thread/thread.o 00:03:41.086 CC lib/thread/iobuf.o 00:03:41.657 LIB libspdk_sock.a 00:03:41.657 SO libspdk_sock.so.10.0 00:03:41.657 SYMLINK libspdk_sock.so 00:03:41.916 CC lib/nvme/nvme_ctrlr.o 00:03:41.916 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.916 CC lib/nvme/nvme_fabric.o 00:03:41.916 CC lib/nvme/nvme_ns_cmd.o 00:03:41.916 CC lib/nvme/nvme_ns.o 00:03:41.916 CC lib/nvme/nvme_pcie.o 00:03:41.916 CC lib/nvme/nvme_pcie_common.o 00:03:41.916 CC lib/nvme/nvme_qpair.o 00:03:41.916 CC lib/nvme/nvme.o 00:03:42.852 LIB libspdk_thread.a 00:03:42.852 SO libspdk_thread.so.11.0 00:03:42.852 SYMLINK libspdk_thread.so 00:03:42.852 CC lib/nvme/nvme_quirks.o 00:03:42.852 CC lib/nvme/nvme_transport.o 00:03:42.852 CC lib/nvme/nvme_discovery.o 00:03:42.852 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.852 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.852 CC lib/nvme/nvme_tcp.o 00:03:42.852 CC lib/nvme/nvme_opal.o 00:03:42.852 CC lib/nvme/nvme_io_msg.o 00:03:43.110 CC lib/nvme/nvme_poll_group.o 00:03:43.369 CC lib/nvme/nvme_zns.o 00:03:43.369 CC lib/nvme/nvme_stubs.o 00:03:43.369 CC lib/nvme/nvme_auth.o 00:03:43.369 CC lib/nvme/nvme_cuse.o 00:03:43.628 CC lib/accel/accel.o 00:03:43.628 CC lib/nvme/nvme_rdma.o 00:03:43.886 CC lib/blob/blobstore.o 00:03:43.886 CC lib/accel/accel_rpc.o 00:03:44.146 CC lib/accel/accel_sw.o 00:03:44.146 CC lib/init/json_config.o 00:03:44.146 CC lib/init/subsystem.o 00:03:44.404 CC lib/blob/request.o 00:03:44.404 CC lib/blob/zeroes.o 00:03:44.404 CC lib/blob/blob_bs_dev.o 00:03:44.404 CC lib/init/subsystem_rpc.o 00:03:44.404 CC lib/init/rpc.o 00:03:44.663 LIB libspdk_init.a 00:03:44.663 CC lib/virtio/virtio.o 00:03:44.663 CC lib/virtio/virtio_vhost_user.o 00:03:44.663 CC lib/virtio/virtio_vfio_user.o 00:03:44.663 CC lib/fsdev/fsdev.o 00:03:44.663 CC lib/fsdev/fsdev_io.o 00:03:44.663 SO libspdk_init.so.6.0 00:03:44.663 CC lib/fsdev/fsdev_rpc.o 00:03:44.663 SYMLINK libspdk_init.so 00:03:44.663 CC lib/virtio/virtio_pci.o 00:03:44.663 LIB libspdk_accel.a 00:03:44.663 SO libspdk_accel.so.16.0 00:03:44.922 SYMLINK libspdk_accel.so 00:03:44.922 LIB libspdk_nvme.a 00:03:44.922 CC lib/event/app.o 00:03:44.922 LIB libspdk_virtio.a 00:03:44.922 CC lib/event/app_rpc.o 00:03:44.922 CC lib/event/reactor.o 00:03:44.922 CC lib/event/log_rpc.o 00:03:44.922 CC lib/bdev/bdev.o 00:03:44.922 CC lib/bdev/bdev_rpc.o 
00:03:45.180 SO libspdk_virtio.so.7.0 00:03:45.180 SYMLINK libspdk_virtio.so 00:03:45.180 CC lib/event/scheduler_static.o 00:03:45.180 SO libspdk_nvme.so.15.0 00:03:45.180 CC lib/bdev/bdev_zone.o 00:03:45.180 LIB libspdk_fsdev.a 00:03:45.180 CC lib/bdev/part.o 00:03:45.180 SO libspdk_fsdev.so.2.0 00:03:45.439 CC lib/bdev/scsi_nvme.o 00:03:45.439 SYMLINK libspdk_fsdev.so 00:03:45.439 SYMLINK libspdk_nvme.so 00:03:45.439 LIB libspdk_event.a 00:03:45.439 SO libspdk_event.so.14.0 00:03:45.439 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:45.698 SYMLINK libspdk_event.so 00:03:46.265 LIB libspdk_fuse_dispatcher.a 00:03:46.265 SO libspdk_fuse_dispatcher.so.1.0 00:03:46.265 SYMLINK libspdk_fuse_dispatcher.so 00:03:46.857 LIB libspdk_blob.a 00:03:46.857 SO libspdk_blob.so.11.0 00:03:46.857 SYMLINK libspdk_blob.so 00:03:47.121 CC lib/blobfs/blobfs.o 00:03:47.121 CC lib/blobfs/tree.o 00:03:47.121 CC lib/lvol/lvol.o 00:03:47.688 LIB libspdk_bdev.a 00:03:47.688 SO libspdk_bdev.so.17.0 00:03:47.947 SYMLINK libspdk_bdev.so 00:03:47.947 CC lib/nvmf/ctrlr.o 00:03:47.947 CC lib/nvmf/ctrlr_discovery.o 00:03:47.947 LIB libspdk_lvol.a 00:03:47.947 CC lib/nvmf/ctrlr_bdev.o 00:03:47.947 CC lib/nvmf/subsystem.o 00:03:47.947 CC lib/scsi/dev.o 00:03:47.947 CC lib/nbd/nbd.o 00:03:47.947 CC lib/ublk/ublk.o 00:03:47.947 CC lib/ftl/ftl_core.o 00:03:47.947 LIB libspdk_blobfs.a 00:03:47.947 SO libspdk_lvol.so.10.0 00:03:48.205 SO libspdk_blobfs.so.10.0 00:03:48.205 SYMLINK libspdk_lvol.so 00:03:48.205 CC lib/ftl/ftl_init.o 00:03:48.205 SYMLINK libspdk_blobfs.so 00:03:48.205 CC lib/ftl/ftl_layout.o 00:03:48.464 CC lib/scsi/lun.o 00:03:48.464 CC lib/ftl/ftl_debug.o 00:03:48.464 CC lib/nbd/nbd_rpc.o 00:03:48.464 CC lib/ftl/ftl_io.o 00:03:48.464 CC lib/ftl/ftl_sb.o 00:03:48.464 CC lib/ftl/ftl_l2p.o 00:03:48.722 CC lib/ftl/ftl_l2p_flat.o 00:03:48.722 LIB libspdk_nbd.a 00:03:48.722 SO libspdk_nbd.so.7.0 00:03:48.722 CC lib/scsi/port.o 00:03:48.722 CC lib/ublk/ublk_rpc.o 00:03:48.722 CC lib/nvmf/nvmf.o 00:03:48.722 CC lib/ftl/ftl_nv_cache.o 00:03:48.722 CC lib/ftl/ftl_band.o 00:03:48.722 SYMLINK libspdk_nbd.so 00:03:48.722 CC lib/ftl/ftl_band_ops.o 00:03:48.722 CC lib/ftl/ftl_writer.o 00:03:48.722 CC lib/scsi/scsi.o 00:03:48.981 CC lib/nvmf/nvmf_rpc.o 00:03:48.981 LIB libspdk_ublk.a 00:03:48.981 SO libspdk_ublk.so.3.0 00:03:48.981 SYMLINK libspdk_ublk.so 00:03:48.981 CC lib/ftl/ftl_rq.o 00:03:48.981 CC lib/scsi/scsi_bdev.o 00:03:48.981 CC lib/ftl/ftl_reloc.o 00:03:48.981 CC lib/ftl/ftl_l2p_cache.o 00:03:49.239 CC lib/scsi/scsi_pr.o 00:03:49.239 CC lib/ftl/ftl_p2l.o 00:03:49.239 CC lib/nvmf/transport.o 00:03:49.497 CC lib/ftl/ftl_p2l_log.o 00:03:49.498 CC lib/scsi/scsi_rpc.o 00:03:49.498 CC lib/nvmf/tcp.o 00:03:49.498 CC lib/nvmf/stubs.o 00:03:49.498 CC lib/scsi/task.o 00:03:49.756 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.756 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.756 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:49.756 CC lib/nvmf/mdns_server.o 00:03:49.756 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:49.756 CC lib/nvmf/rdma.o 00:03:49.756 LIB libspdk_scsi.a 00:03:49.756 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.014 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.014 SO libspdk_scsi.so.9.0 00:03:50.014 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.014 CC lib/nvmf/auth.o 00:03:50.014 SYMLINK libspdk_scsi.so 00:03:50.014 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.014 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.273 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.273 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.273 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.273 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.273 CC lib/iscsi/conn.o 00:03:50.273 CC lib/ftl/utils/ftl_conf.o 00:03:50.273 CC lib/vhost/vhost.o 00:03:50.532 CC lib/iscsi/init_grp.o 00:03:50.532 CC lib/iscsi/iscsi.o 00:03:50.532 CC lib/iscsi/param.o 00:03:50.532 CC lib/iscsi/portal_grp.o 00:03:50.791 CC lib/ftl/utils/ftl_md.o 00:03:50.791 CC lib/ftl/utils/ftl_mempool.o 00:03:50.791 CC lib/ftl/utils/ftl_bitmap.o 00:03:50.791 CC lib/vhost/vhost_rpc.o 00:03:50.791 CC lib/vhost/vhost_scsi.o 00:03:50.791 CC lib/ftl/utils/ftl_property.o 00:03:51.050 CC lib/iscsi/tgt_node.o 00:03:51.050 CC lib/iscsi/iscsi_subsystem.o 00:03:51.050 CC lib/vhost/vhost_blk.o 00:03:51.050 CC lib/vhost/rte_vhost_user.o 00:03:51.050 CC lib/iscsi/iscsi_rpc.o 00:03:51.050 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.309 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.309 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.568 CC lib/iscsi/task.o 00:03:51.568 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.568 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.568 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.568 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.568 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.826 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.826 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.826 LIB libspdk_nvmf.a 00:03:51.826 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.826 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:51.826 LIB libspdk_iscsi.a 00:03:51.826 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:51.826 CC lib/ftl/base/ftl_base_dev.o 00:03:51.826 SO libspdk_nvmf.so.20.0 00:03:51.826 SO libspdk_iscsi.so.8.0 00:03:52.085 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.085 CC lib/ftl/ftl_trace.o 00:03:52.085 SYMLINK libspdk_iscsi.so 00:03:52.085 SYMLINK libspdk_nvmf.so 00:03:52.085 LIB libspdk_vhost.a 00:03:52.343 SO libspdk_vhost.so.8.0 00:03:52.343 LIB libspdk_ftl.a 00:03:52.343 SYMLINK libspdk_vhost.so 00:03:52.601 SO libspdk_ftl.so.9.0 00:03:52.859 SYMLINK libspdk_ftl.so 00:03:53.117 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.117 CC module/accel/error/accel_error.o 00:03:53.117 CC module/sock/posix/posix.o 00:03:53.117 CC module/accel/dsa/accel_dsa.o 00:03:53.117 CC module/keyring/file/keyring.o 00:03:53.117 CC module/blob/bdev/blob_bdev.o 00:03:53.117 CC module/sock/uring/uring.o 00:03:53.117 CC module/accel/ioat/accel_ioat.o 00:03:53.117 CC module/fsdev/aio/fsdev_aio.o 00:03:53.117 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.376 LIB libspdk_env_dpdk_rpc.a 00:03:53.376 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.376 CC module/keyring/file/keyring_rpc.o 00:03:53.376 CC module/accel/error/accel_error_rpc.o 00:03:53.376 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.376 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:53.376 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.376 LIB libspdk_scheduler_dynamic.a 00:03:53.376 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.376 LIB libspdk_blob_bdev.a 00:03:53.376 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.634 SO libspdk_blob_bdev.so.11.0 00:03:53.634 LIB libspdk_keyring_file.a 00:03:53.634 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.634 LIB libspdk_accel_error.a 00:03:53.634 SO libspdk_keyring_file.so.2.0 00:03:53.634 LIB libspdk_accel_ioat.a 00:03:53.634 SO libspdk_accel_error.so.2.0 00:03:53.634 CC module/fsdev/aio/linux_aio_mgr.o 00:03:53.634 SYMLINK libspdk_blob_bdev.so 00:03:53.634 SO libspdk_accel_ioat.so.6.0 00:03:53.634 SYMLINK libspdk_keyring_file.so 00:03:53.634 SYMLINK libspdk_accel_error.so 00:03:53.634 LIB libspdk_accel_dsa.a 00:03:53.634 SYMLINK libspdk_accel_ioat.so 00:03:53.634 SO 
libspdk_accel_dsa.so.5.0 00:03:53.634 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.634 CC module/keyring/linux/keyring.o 00:03:53.634 SYMLINK libspdk_accel_dsa.so 00:03:53.892 CC module/accel/iaa/accel_iaa.o 00:03:53.892 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.892 LIB libspdk_fsdev_aio.a 00:03:53.892 CC module/bdev/delay/vbdev_delay.o 00:03:53.892 CC module/keyring/linux/keyring_rpc.o 00:03:53.892 CC module/blobfs/bdev/blobfs_bdev.o 00:03:53.892 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.892 LIB libspdk_sock_uring.a 00:03:53.892 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.892 SO libspdk_fsdev_aio.so.1.0 00:03:53.892 CC module/bdev/error/vbdev_error.o 00:03:53.892 LIB libspdk_sock_posix.a 00:03:53.892 SO libspdk_sock_uring.so.5.0 00:03:53.892 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.892 CC module/bdev/error/vbdev_error_rpc.o 00:03:53.892 SO libspdk_sock_posix.so.6.0 00:03:53.892 SYMLINK libspdk_fsdev_aio.so 00:03:53.892 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.150 SYMLINK libspdk_sock_uring.so 00:03:54.150 LIB libspdk_keyring_linux.a 00:03:54.150 SYMLINK libspdk_sock_posix.so 00:03:54.150 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.150 SO libspdk_keyring_linux.so.1.0 00:03:54.150 LIB libspdk_scheduler_gscheduler.a 00:03:54.150 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.150 SYMLINK libspdk_keyring_linux.so 00:03:54.150 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.150 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.150 LIB libspdk_blobfs_bdev.a 00:03:54.150 LIB libspdk_accel_iaa.a 00:03:54.150 CC module/bdev/gpt/gpt.o 00:03:54.150 LIB libspdk_bdev_error.a 00:03:54.150 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.150 SO libspdk_blobfs_bdev.so.6.0 00:03:54.150 SO libspdk_accel_iaa.so.3.0 00:03:54.150 SO libspdk_bdev_error.so.6.0 00:03:54.150 CC module/bdev/malloc/bdev_malloc.o 00:03:54.408 SYMLINK libspdk_blobfs_bdev.so 00:03:54.408 SYMLINK libspdk_accel_iaa.so 00:03:54.408 CC module/bdev/null/bdev_null.o 00:03:54.408 SYMLINK libspdk_bdev_error.so 00:03:54.408 CC module/bdev/null/bdev_null_rpc.o 00:03:54.408 CC module/bdev/nvme/bdev_nvme.o 00:03:54.408 LIB libspdk_bdev_delay.a 00:03:54.408 SO libspdk_bdev_delay.so.6.0 00:03:54.408 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.408 SYMLINK libspdk_bdev_delay.so 00:03:54.408 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.408 CC module/bdev/raid/bdev_raid.o 00:03:54.408 CC module/bdev/split/vbdev_split.o 00:03:54.408 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.666 LIB libspdk_bdev_null.a 00:03:54.666 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.666 SO libspdk_bdev_null.so.6.0 00:03:54.666 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.666 LIB libspdk_bdev_gpt.a 00:03:54.666 SYMLINK libspdk_bdev_null.so 00:03:54.666 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:54.666 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.666 SO libspdk_bdev_gpt.so.6.0 00:03:54.666 CC module/bdev/nvme/nvme_rpc.o 00:03:54.666 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.925 SYMLINK libspdk_bdev_gpt.so 00:03:54.925 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.925 LIB libspdk_bdev_malloc.a 00:03:54.925 SO libspdk_bdev_malloc.so.6.0 00:03:54.925 LIB libspdk_bdev_lvol.a 00:03:54.925 SO libspdk_bdev_lvol.so.6.0 00:03:54.925 LIB libspdk_bdev_split.a 00:03:54.925 SYMLINK libspdk_bdev_malloc.so 00:03:54.925 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:54.925 CC module/bdev/nvme/vbdev_opal.o 00:03:54.925 LIB libspdk_bdev_passthru.a 00:03:54.925 SO libspdk_bdev_split.so.6.0 00:03:54.925 
SYMLINK libspdk_bdev_lvol.so 00:03:54.925 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.925 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.925 SO libspdk_bdev_passthru.so.6.0 00:03:55.183 SYMLINK libspdk_bdev_split.so 00:03:55.183 SYMLINK libspdk_bdev_passthru.so 00:03:55.183 LIB libspdk_bdev_zone_block.a 00:03:55.183 SO libspdk_bdev_zone_block.so.6.0 00:03:55.183 CC module/bdev/uring/bdev_uring.o 00:03:55.183 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.183 SYMLINK libspdk_bdev_zone_block.so 00:03:55.183 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.183 CC module/bdev/aio/bdev_aio.o 00:03:55.183 CC module/bdev/uring/bdev_uring_rpc.o 00:03:55.183 CC module/bdev/ftl/bdev_ftl.o 00:03:55.441 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.441 CC module/bdev/iscsi/bdev_iscsi.o 00:03:55.441 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.441 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.441 CC module/bdev/raid/raid0.o 00:03:55.441 CC module/bdev/raid/raid1.o 00:03:55.441 LIB libspdk_bdev_uring.a 00:03:55.700 LIB libspdk_bdev_ftl.a 00:03:55.700 CC module/bdev/raid/concat.o 00:03:55.700 SO libspdk_bdev_uring.so.6.0 00:03:55.700 SO libspdk_bdev_ftl.so.6.0 00:03:55.700 LIB libspdk_bdev_aio.a 00:03:55.700 SO libspdk_bdev_aio.so.6.0 00:03:55.700 SYMLINK libspdk_bdev_uring.so 00:03:55.700 SYMLINK libspdk_bdev_ftl.so 00:03:55.700 SYMLINK libspdk_bdev_aio.so 00:03:55.700 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.700 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:55.700 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.958 LIB libspdk_bdev_iscsi.a 00:03:55.958 LIB libspdk_bdev_raid.a 00:03:55.958 SO libspdk_bdev_iscsi.so.6.0 00:03:55.958 SO libspdk_bdev_raid.so.6.0 00:03:55.958 SYMLINK libspdk_bdev_iscsi.so 00:03:55.958 SYMLINK libspdk_bdev_raid.so 00:03:56.217 LIB libspdk_bdev_virtio.a 00:03:56.217 SO libspdk_bdev_virtio.so.6.0 00:03:56.475 SYMLINK libspdk_bdev_virtio.so 00:03:56.734 LIB libspdk_bdev_nvme.a 00:03:56.734 SO libspdk_bdev_nvme.so.7.1 00:03:56.992 SYMLINK libspdk_bdev_nvme.so 00:03:57.558 CC module/event/subsystems/fsdev/fsdev.o 00:03:57.558 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.558 CC module/event/subsystems/sock/sock.o 00:03:57.558 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.558 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.558 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.558 CC module/event/subsystems/keyring/keyring.o 00:03:57.558 CC module/event/subsystems/vmd/vmd.o 00:03:57.558 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.558 LIB libspdk_event_sock.a 00:03:57.558 LIB libspdk_event_vhost_blk.a 00:03:57.558 LIB libspdk_event_scheduler.a 00:03:57.558 LIB libspdk_event_keyring.a 00:03:57.558 LIB libspdk_event_fsdev.a 00:03:57.558 SO libspdk_event_sock.so.5.0 00:03:57.558 SO libspdk_event_vhost_blk.so.3.0 00:03:57.558 LIB libspdk_event_vmd.a 00:03:57.558 SO libspdk_event_scheduler.so.4.0 00:03:57.558 LIB libspdk_event_iobuf.a 00:03:57.558 SO libspdk_event_keyring.so.1.0 00:03:57.558 SO libspdk_event_fsdev.so.1.0 00:03:57.558 SO libspdk_event_vmd.so.6.0 00:03:57.558 SO libspdk_event_iobuf.so.3.0 00:03:57.558 SYMLINK libspdk_event_sock.so 00:03:57.558 SYMLINK libspdk_event_vhost_blk.so 00:03:57.558 SYMLINK libspdk_event_keyring.so 00:03:57.558 SYMLINK libspdk_event_fsdev.so 00:03:57.558 SYMLINK libspdk_event_scheduler.so 00:03:57.817 SYMLINK libspdk_event_vmd.so 00:03:57.817 SYMLINK libspdk_event_iobuf.so 00:03:57.817 CC module/event/subsystems/accel/accel.o 00:03:58.076 LIB libspdk_event_accel.a 00:03:58.076 SO libspdk_event_accel.so.6.0 
00:03:58.076 SYMLINK libspdk_event_accel.so 00:03:58.335 CC module/event/subsystems/bdev/bdev.o 00:03:58.595 LIB libspdk_event_bdev.a 00:03:58.595 SO libspdk_event_bdev.so.6.0 00:03:58.853 SYMLINK libspdk_event_bdev.so 00:03:58.853 CC module/event/subsystems/nbd/nbd.o 00:03:58.853 CC module/event/subsystems/scsi/scsi.o 00:03:58.853 CC module/event/subsystems/ublk/ublk.o 00:03:58.853 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.853 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.112 LIB libspdk_event_nbd.a 00:03:59.112 LIB libspdk_event_ublk.a 00:03:59.112 LIB libspdk_event_scsi.a 00:03:59.112 SO libspdk_event_ublk.so.3.0 00:03:59.112 SO libspdk_event_nbd.so.6.0 00:03:59.112 SO libspdk_event_scsi.so.6.0 00:03:59.112 SYMLINK libspdk_event_ublk.so 00:03:59.112 SYMLINK libspdk_event_nbd.so 00:03:59.112 SYMLINK libspdk_event_scsi.so 00:03:59.371 LIB libspdk_event_nvmf.a 00:03:59.371 SO libspdk_event_nvmf.so.6.0 00:03:59.371 SYMLINK libspdk_event_nvmf.so 00:03:59.371 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.371 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.630 LIB libspdk_event_vhost_scsi.a 00:03:59.630 LIB libspdk_event_iscsi.a 00:03:59.630 SO libspdk_event_vhost_scsi.so.3.0 00:03:59.630 SO libspdk_event_iscsi.so.6.0 00:03:59.630 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.889 SYMLINK libspdk_event_iscsi.so 00:03:59.889 SO libspdk.so.6.0 00:03:59.889 SYMLINK libspdk.so 00:04:00.148 CC test/rpc_client/rpc_client_test.o 00:04:00.148 CXX app/trace/trace.o 00:04:00.148 TEST_HEADER include/spdk/accel.h 00:04:00.148 TEST_HEADER include/spdk/accel_module.h 00:04:00.148 TEST_HEADER include/spdk/assert.h 00:04:00.148 TEST_HEADER include/spdk/barrier.h 00:04:00.148 TEST_HEADER include/spdk/base64.h 00:04:00.148 TEST_HEADER include/spdk/bdev.h 00:04:00.148 TEST_HEADER include/spdk/bdev_module.h 00:04:00.148 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.148 TEST_HEADER include/spdk/bit_array.h 00:04:00.148 TEST_HEADER include/spdk/bit_pool.h 00:04:00.148 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.148 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.148 TEST_HEADER include/spdk/blobfs.h 00:04:00.148 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:00.148 TEST_HEADER include/spdk/blob.h 00:04:00.148 TEST_HEADER include/spdk/conf.h 00:04:00.148 TEST_HEADER include/spdk/config.h 00:04:00.148 TEST_HEADER include/spdk/cpuset.h 00:04:00.148 TEST_HEADER include/spdk/crc16.h 00:04:00.148 TEST_HEADER include/spdk/crc32.h 00:04:00.148 TEST_HEADER include/spdk/crc64.h 00:04:00.148 TEST_HEADER include/spdk/dif.h 00:04:00.148 TEST_HEADER include/spdk/dma.h 00:04:00.148 TEST_HEADER include/spdk/endian.h 00:04:00.148 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.148 TEST_HEADER include/spdk/env.h 00:04:00.148 TEST_HEADER include/spdk/event.h 00:04:00.148 TEST_HEADER include/spdk/fd_group.h 00:04:00.148 TEST_HEADER include/spdk/fd.h 00:04:00.148 TEST_HEADER include/spdk/file.h 00:04:00.148 TEST_HEADER include/spdk/fsdev.h 00:04:00.148 TEST_HEADER include/spdk/fsdev_module.h 00:04:00.148 TEST_HEADER include/spdk/ftl.h 00:04:00.148 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:00.148 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.148 TEST_HEADER include/spdk/hexlify.h 00:04:00.148 CC examples/util/zipf/zipf.o 00:04:00.148 CC test/thread/poller_perf/poller_perf.o 00:04:00.148 TEST_HEADER include/spdk/histogram_data.h 00:04:00.148 TEST_HEADER include/spdk/idxd.h 00:04:00.148 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.148 CC examples/ioat/perf/perf.o 00:04:00.148 TEST_HEADER 
include/spdk/init.h 00:04:00.148 TEST_HEADER include/spdk/ioat.h 00:04:00.148 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.148 TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.407 TEST_HEADER include/spdk/json.h 00:04:00.407 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.407 TEST_HEADER include/spdk/keyring.h 00:04:00.407 TEST_HEADER include/spdk/keyring_module.h 00:04:00.407 TEST_HEADER include/spdk/likely.h 00:04:00.407 TEST_HEADER include/spdk/log.h 00:04:00.407 TEST_HEADER include/spdk/lvol.h 00:04:00.407 TEST_HEADER include/spdk/md5.h 00:04:00.407 TEST_HEADER include/spdk/memory.h 00:04:00.407 TEST_HEADER include/spdk/mmio.h 00:04:00.407 TEST_HEADER include/spdk/nbd.h 00:04:00.407 TEST_HEADER include/spdk/net.h 00:04:00.407 TEST_HEADER include/spdk/notify.h 00:04:00.407 TEST_HEADER include/spdk/nvme.h 00:04:00.407 TEST_HEADER include/spdk/nvme_intel.h 00:04:00.407 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.407 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.407 CC test/dma/test_dma/test_dma.o 00:04:00.407 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.407 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.407 CC test/app/bdev_svc/bdev_svc.o 00:04:00.407 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.407 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.407 TEST_HEADER include/spdk/nvmf.h 00:04:00.407 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.407 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.407 TEST_HEADER include/spdk/opal.h 00:04:00.407 TEST_HEADER include/spdk/opal_spec.h 00:04:00.407 TEST_HEADER include/spdk/pci_ids.h 00:04:00.407 TEST_HEADER include/spdk/pipe.h 00:04:00.407 TEST_HEADER include/spdk/queue.h 00:04:00.407 TEST_HEADER include/spdk/reduce.h 00:04:00.407 TEST_HEADER include/spdk/rpc.h 00:04:00.407 TEST_HEADER include/spdk/scheduler.h 00:04:00.407 TEST_HEADER include/spdk/scsi.h 00:04:00.407 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.407 TEST_HEADER include/spdk/sock.h 00:04:00.407 CC test/env/mem_callbacks/mem_callbacks.o 00:04:00.407 TEST_HEADER include/spdk/stdinc.h 00:04:00.407 TEST_HEADER include/spdk/string.h 00:04:00.407 TEST_HEADER include/spdk/thread.h 00:04:00.407 TEST_HEADER include/spdk/trace.h 00:04:00.407 TEST_HEADER include/spdk/trace_parser.h 00:04:00.407 TEST_HEADER include/spdk/tree.h 00:04:00.407 LINK rpc_client_test 00:04:00.407 TEST_HEADER include/spdk/ublk.h 00:04:00.407 TEST_HEADER include/spdk/util.h 00:04:00.407 TEST_HEADER include/spdk/uuid.h 00:04:00.407 TEST_HEADER include/spdk/version.h 00:04:00.407 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.407 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.407 TEST_HEADER include/spdk/vhost.h 00:04:00.407 TEST_HEADER include/spdk/vmd.h 00:04:00.407 TEST_HEADER include/spdk/xor.h 00:04:00.407 TEST_HEADER include/spdk/zipf.h 00:04:00.407 CXX test/cpp_headers/accel.o 00:04:00.407 LINK interrupt_tgt 00:04:00.407 LINK zipf 00:04:00.407 LINK poller_perf 00:04:00.407 LINK ioat_perf 00:04:00.665 LINK bdev_svc 00:04:00.665 CXX test/cpp_headers/accel_module.o 00:04:00.665 CXX test/cpp_headers/assert.o 00:04:00.665 CXX test/cpp_headers/barrier.o 00:04:00.665 LINK spdk_trace 00:04:00.665 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.665 CC examples/ioat/verify/verify.o 00:04:00.924 CXX test/cpp_headers/base64.o 00:04:00.924 CC test/event/event_perf/event_perf.o 00:04:00.924 CC test/event/reactor/reactor.o 00:04:00.924 LINK test_dma 00:04:00.924 CC app/trace_record/trace_record.o 00:04:00.924 LINK verify 00:04:00.924 CC examples/sock/hello_world/hello_sock.o 00:04:00.924 CXX test/cpp_headers/bdev.o 
00:04:00.924 CC examples/thread/thread/thread_ex.o 00:04:00.924 LINK event_perf 00:04:00.924 LINK reactor 00:04:00.924 LINK mem_callbacks 00:04:01.182 LINK nvme_fuzz 00:04:01.182 CC test/app/jsoncat/jsoncat.o 00:04:01.182 CXX test/cpp_headers/bdev_module.o 00:04:01.182 CC test/app/histogram_perf/histogram_perf.o 00:04:01.182 LINK spdk_trace_record 00:04:01.182 LINK hello_sock 00:04:01.182 CC test/env/vtophys/vtophys.o 00:04:01.182 CC test/event/reactor_perf/reactor_perf.o 00:04:01.182 CC test/app/stub/stub.o 00:04:01.182 LINK thread 00:04:01.440 LINK jsoncat 00:04:01.440 LINK histogram_perf 00:04:01.440 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.440 LINK vtophys 00:04:01.440 LINK reactor_perf 00:04:01.440 CXX test/cpp_headers/bdev_zone.o 00:04:01.440 LINK stub 00:04:01.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.440 CC app/nvmf_tgt/nvmf_main.o 00:04:01.440 CXX test/cpp_headers/bit_array.o 00:04:01.440 CXX test/cpp_headers/bit_pool.o 00:04:01.698 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.698 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.698 CC test/event/app_repeat/app_repeat.o 00:04:01.698 CXX test/cpp_headers/blob_bdev.o 00:04:01.698 CC examples/vmd/lsvmd/lsvmd.o 00:04:01.698 LINK nvmf_tgt 00:04:01.698 CC examples/vmd/led/led.o 00:04:01.955 LINK app_repeat 00:04:01.955 LINK lsvmd 00:04:01.955 LINK led 00:04:01.955 CC test/accel/dif/dif.o 00:04:01.955 LINK env_dpdk_post_init 00:04:01.955 CC test/blobfs/mkfs/mkfs.o 00:04:01.955 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.955 CXX test/cpp_headers/blobfs.o 00:04:01.955 LINK vhost_fuzz 00:04:01.955 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.213 CC test/env/memory/memory_ut.o 00:04:02.213 LINK mkfs 00:04:02.213 CC test/event/scheduler/scheduler.o 00:04:02.213 CXX test/cpp_headers/blob.o 00:04:02.213 CXX test/cpp_headers/conf.o 00:04:02.213 CC examples/idxd/perf/perf.o 00:04:02.213 LINK iscsi_tgt 00:04:02.213 CXX test/cpp_headers/config.o 00:04:02.471 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:02.471 CXX test/cpp_headers/cpuset.o 00:04:02.471 LINK scheduler 00:04:02.471 LINK dif 00:04:02.471 CXX test/cpp_headers/crc16.o 00:04:02.471 LINK idxd_perf 00:04:02.471 CC examples/accel/perf/accel_perf.o 00:04:02.730 LINK hello_fsdev 00:04:02.730 CC app/spdk_tgt/spdk_tgt.o 00:04:02.730 CC test/lvol/esnap/esnap.o 00:04:02.730 CXX test/cpp_headers/crc32.o 00:04:02.730 CC test/nvme/aer/aer.o 00:04:02.730 CC test/nvme/reset/reset.o 00:04:02.730 CC test/nvme/sgl/sgl.o 00:04:02.988 LINK spdk_tgt 00:04:02.988 CC test/nvme/e2edp/nvme_dp.o 00:04:02.988 CXX test/cpp_headers/crc64.o 00:04:02.988 LINK iscsi_fuzz 00:04:02.988 LINK accel_perf 00:04:02.988 LINK aer 00:04:02.988 LINK reset 00:04:03.246 CXX test/cpp_headers/dif.o 00:04:03.246 LINK sgl 00:04:03.246 CC app/spdk_lspci/spdk_lspci.o 00:04:03.246 LINK nvme_dp 00:04:03.246 CXX test/cpp_headers/dma.o 00:04:03.246 CC test/nvme/overhead/overhead.o 00:04:03.246 CC test/nvme/err_injection/err_injection.o 00:04:03.246 LINK spdk_lspci 00:04:03.246 LINK memory_ut 00:04:03.505 CC test/env/pci/pci_ut.o 00:04:03.505 CC examples/nvme/hello_world/hello_world.o 00:04:03.505 CC test/nvme/startup/startup.o 00:04:03.505 CC examples/blob/hello_world/hello_blob.o 00:04:03.505 CXX test/cpp_headers/endian.o 00:04:03.505 LINK err_injection 00:04:03.505 CC app/spdk_nvme_perf/perf.o 00:04:03.763 LINK overhead 00:04:03.763 CC examples/blob/cli/blobcli.o 00:04:03.763 LINK startup 00:04:03.763 LINK hello_world 00:04:03.763 CXX test/cpp_headers/env_dpdk.o 00:04:03.763 LINK hello_blob 
00:04:03.763 LINK pci_ut 00:04:03.763 CXX test/cpp_headers/env.o 00:04:03.763 CC app/spdk_nvme_identify/identify.o 00:04:04.021 CC test/nvme/reserve/reserve.o 00:04:04.021 CC examples/nvme/reconnect/reconnect.o 00:04:04.021 CC test/bdev/bdevio/bdevio.o 00:04:04.021 CXX test/cpp_headers/event.o 00:04:04.021 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:04.021 CXX test/cpp_headers/fd_group.o 00:04:04.021 LINK reserve 00:04:04.279 LINK blobcli 00:04:04.279 CXX test/cpp_headers/fd.o 00:04:04.279 CXX test/cpp_headers/file.o 00:04:04.279 CC test/nvme/simple_copy/simple_copy.o 00:04:04.279 LINK reconnect 00:04:04.279 LINK bdevio 00:04:04.280 CXX test/cpp_headers/fsdev.o 00:04:04.537 LINK spdk_nvme_perf 00:04:04.537 CC app/spdk_nvme_discover/discovery_aer.o 00:04:04.537 CC app/spdk_top/spdk_top.o 00:04:04.537 LINK simple_copy 00:04:04.537 CXX test/cpp_headers/fsdev_module.o 00:04:04.537 LINK nvme_manage 00:04:04.537 CC test/nvme/connect_stress/connect_stress.o 00:04:04.796 CC examples/bdev/hello_world/hello_bdev.o 00:04:04.796 LINK spdk_nvme_identify 00:04:04.796 LINK spdk_nvme_discover 00:04:04.796 CXX test/cpp_headers/ftl.o 00:04:04.796 CC test/nvme/boot_partition/boot_partition.o 00:04:04.796 CC app/vhost/vhost.o 00:04:04.796 CC examples/nvme/arbitration/arbitration.o 00:04:04.796 LINK connect_stress 00:04:05.054 LINK hello_bdev 00:04:05.054 CXX test/cpp_headers/fuse_dispatcher.o 00:04:05.054 LINK boot_partition 00:04:05.054 LINK vhost 00:04:05.054 CC test/nvme/compliance/nvme_compliance.o 00:04:05.054 CC app/spdk_dd/spdk_dd.o 00:04:05.054 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.054 CXX test/cpp_headers/gpt_spec.o 00:04:05.054 LINK arbitration 00:04:05.313 CC examples/nvme/hotplug/hotplug.o 00:04:05.313 CC examples/bdev/bdevperf/bdevperf.o 00:04:05.313 LINK fused_ordering 00:04:05.313 CXX test/cpp_headers/hexlify.o 00:04:05.313 LINK nvme_compliance 00:04:05.313 CC app/fio/nvme/fio_plugin.o 00:04:05.313 CC app/fio/bdev/fio_plugin.o 00:04:05.313 LINK spdk_top 00:04:05.571 LINK hotplug 00:04:05.571 LINK spdk_dd 00:04:05.571 CXX test/cpp_headers/histogram_data.o 00:04:05.571 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:05.571 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.571 CXX test/cpp_headers/idxd.o 00:04:05.571 CC examples/nvme/abort/abort.o 00:04:05.829 CXX test/cpp_headers/idxd_spec.o 00:04:05.829 LINK cmb_copy 00:04:05.829 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:05.829 LINK doorbell_aers 00:04:05.829 CC test/nvme/fdp/fdp.o 00:04:05.829 LINK spdk_nvme 00:04:05.829 CXX test/cpp_headers/init.o 00:04:05.829 LINK spdk_bdev 00:04:05.829 CXX test/cpp_headers/ioat.o 00:04:06.087 LINK pmr_persistence 00:04:06.087 CC test/nvme/cuse/cuse.o 00:04:06.087 CXX test/cpp_headers/ioat_spec.o 00:04:06.087 CXX test/cpp_headers/iscsi_spec.o 00:04:06.087 LINK abort 00:04:06.087 LINK bdevperf 00:04:06.087 CXX test/cpp_headers/json.o 00:04:06.087 CXX test/cpp_headers/jsonrpc.o 00:04:06.087 LINK fdp 00:04:06.087 CXX test/cpp_headers/keyring.o 00:04:06.087 CXX test/cpp_headers/keyring_module.o 00:04:06.345 CXX test/cpp_headers/likely.o 00:04:06.345 CXX test/cpp_headers/log.o 00:04:06.345 CXX test/cpp_headers/lvol.o 00:04:06.345 CXX test/cpp_headers/md5.o 00:04:06.345 CXX test/cpp_headers/memory.o 00:04:06.345 CXX test/cpp_headers/mmio.o 00:04:06.345 CXX test/cpp_headers/nbd.o 00:04:06.345 CXX test/cpp_headers/net.o 00:04:06.345 CXX test/cpp_headers/notify.o 00:04:06.345 CXX test/cpp_headers/nvme.o 00:04:06.345 CXX test/cpp_headers/nvme_intel.o 00:04:06.345 CXX 
test/cpp_headers/nvme_ocssd.o 00:04:06.603 CC examples/nvmf/nvmf/nvmf.o 00:04:06.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:06.603 CXX test/cpp_headers/nvme_spec.o 00:04:06.603 CXX test/cpp_headers/nvme_zns.o 00:04:06.603 CXX test/cpp_headers/nvmf_cmd.o 00:04:06.603 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:06.603 CXX test/cpp_headers/nvmf.o 00:04:06.603 CXX test/cpp_headers/nvmf_spec.o 00:04:06.603 CXX test/cpp_headers/nvmf_transport.o 00:04:06.603 CXX test/cpp_headers/opal.o 00:04:06.862 CXX test/cpp_headers/opal_spec.o 00:04:06.862 CXX test/cpp_headers/pci_ids.o 00:04:06.862 LINK nvmf 00:04:06.862 CXX test/cpp_headers/pipe.o 00:04:06.862 CXX test/cpp_headers/queue.o 00:04:06.862 CXX test/cpp_headers/reduce.o 00:04:06.862 CXX test/cpp_headers/rpc.o 00:04:06.862 CXX test/cpp_headers/scheduler.o 00:04:06.862 CXX test/cpp_headers/scsi.o 00:04:06.862 CXX test/cpp_headers/scsi_spec.o 00:04:06.862 CXX test/cpp_headers/sock.o 00:04:06.862 CXX test/cpp_headers/stdinc.o 00:04:06.862 CXX test/cpp_headers/string.o 00:04:07.120 CXX test/cpp_headers/thread.o 00:04:07.120 CXX test/cpp_headers/trace.o 00:04:07.120 CXX test/cpp_headers/trace_parser.o 00:04:07.120 CXX test/cpp_headers/tree.o 00:04:07.120 CXX test/cpp_headers/ublk.o 00:04:07.120 CXX test/cpp_headers/util.o 00:04:07.120 CXX test/cpp_headers/uuid.o 00:04:07.120 CXX test/cpp_headers/version.o 00:04:07.120 CXX test/cpp_headers/vfio_user_pci.o 00:04:07.120 CXX test/cpp_headers/vfio_user_spec.o 00:04:07.120 CXX test/cpp_headers/vhost.o 00:04:07.120 CXX test/cpp_headers/vmd.o 00:04:07.379 CXX test/cpp_headers/xor.o 00:04:07.379 CXX test/cpp_headers/zipf.o 00:04:07.379 LINK cuse 00:04:07.947 LINK esnap 00:04:08.207 00:04:08.207 real 1m29.599s 00:04:08.207 user 8m31.561s 00:04:08.207 sys 1m32.814s 00:04:08.207 10:25:56 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:08.207 ************************************ 00:04:08.207 10:25:56 make -- common/autotest_common.sh@10 -- $ set +x 00:04:08.207 END TEST make 00:04:08.207 ************************************ 00:04:08.207 10:25:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:08.207 10:25:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:08.207 10:25:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:08.207 10:25:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.207 10:25:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:08.207 10:25:56 -- pm/common@44 -- $ pid=5306 00:04:08.207 10:25:56 -- pm/common@50 -- $ kill -TERM 5306 00:04:08.207 10:25:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.207 10:25:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:08.207 10:25:56 -- pm/common@44 -- $ pid=5308 00:04:08.207 10:25:56 -- pm/common@50 -- $ kill -TERM 5308 00:04:08.207 10:25:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:08.207 10:25:56 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:08.467 10:25:56 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.467 10:25:56 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.467 10:25:56 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.467 10:25:57 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.467 10:25:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.467 10:25:57 
-- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.467 10:25:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.467 10:25:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.467 10:25:57 -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.467 10:25:57 -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.467 10:25:57 -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.467 10:25:57 -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.467 10:25:57 -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.467 10:25:57 -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.467 10:25:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.467 10:25:57 -- scripts/common.sh@344 -- # case "$op" in 00:04:08.467 10:25:57 -- scripts/common.sh@345 -- # : 1 00:04:08.467 10:25:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.467 10:25:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.467 10:25:57 -- scripts/common.sh@365 -- # decimal 1 00:04:08.467 10:25:57 -- scripts/common.sh@353 -- # local d=1 00:04:08.467 10:25:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.467 10:25:57 -- scripts/common.sh@355 -- # echo 1 00:04:08.467 10:25:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.467 10:25:57 -- scripts/common.sh@366 -- # decimal 2 00:04:08.467 10:25:57 -- scripts/common.sh@353 -- # local d=2 00:04:08.467 10:25:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.467 10:25:57 -- scripts/common.sh@355 -- # echo 2 00:04:08.467 10:25:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.467 10:25:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.467 10:25:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.467 10:25:57 -- scripts/common.sh@368 -- # return 0 00:04:08.467 10:25:57 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.467 10:25:57 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.467 --rc genhtml_branch_coverage=1 00:04:08.467 --rc genhtml_function_coverage=1 00:04:08.467 --rc genhtml_legend=1 00:04:08.467 --rc geninfo_all_blocks=1 00:04:08.467 --rc geninfo_unexecuted_blocks=1 00:04:08.467 00:04:08.467 ' 00:04:08.467 10:25:57 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.467 --rc genhtml_branch_coverage=1 00:04:08.467 --rc genhtml_function_coverage=1 00:04:08.467 --rc genhtml_legend=1 00:04:08.467 --rc geninfo_all_blocks=1 00:04:08.467 --rc geninfo_unexecuted_blocks=1 00:04:08.467 00:04:08.467 ' 00:04:08.467 10:25:57 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.467 --rc genhtml_branch_coverage=1 00:04:08.467 --rc genhtml_function_coverage=1 00:04:08.467 --rc genhtml_legend=1 00:04:08.467 --rc geninfo_all_blocks=1 00:04:08.467 --rc geninfo_unexecuted_blocks=1 00:04:08.467 00:04:08.467 ' 00:04:08.467 10:25:57 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.467 --rc genhtml_branch_coverage=1 00:04:08.467 --rc genhtml_function_coverage=1 00:04:08.467 --rc genhtml_legend=1 00:04:08.467 --rc geninfo_all_blocks=1 00:04:08.467 --rc geninfo_unexecuted_blocks=1 00:04:08.467 00:04:08.467 ' 00:04:08.467 10:25:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.467 
10:25:57 -- nvmf/common.sh@7 -- # uname -s 00:04:08.467 10:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.467 10:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.467 10:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.467 10:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.467 10:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.467 10:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.467 10:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.467 10:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.467 10:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.467 10:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.467 10:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:04:08.467 10:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:04:08.467 10:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.467 10:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.467 10:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:08.467 10:25:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.467 10:25:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:08.467 10:25:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.467 10:25:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.467 10:25:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.467 10:25:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.467 10:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.467 10:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.467 10:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.467 10:25:57 -- paths/export.sh@5 -- # export PATH 00:04:08.467 10:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.467 10:25:57 -- nvmf/common.sh@51 -- # : 0 00:04:08.467 10:25:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.467 10:25:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.467 10:25:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.467 10:25:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.467 10:25:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.467 10:25:57 -- nvmf/common.sh@33 -- # '[' 
'' -eq 1 ']' 00:04:08.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.467 10:25:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.467 10:25:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.467 10:25:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.467 10:25:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.467 10:25:57 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.467 10:25:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.467 10:25:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.467 10:25:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.467 10:25:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.467 10:25:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.467 10:25:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.467 10:25:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.467 10:25:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.467 10:25:57 -- spdk/autotest.sh@48 -- # udevadm_pid=54392 00:04:08.467 10:25:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.467 10:25:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.467 10:25:57 -- pm/common@17 -- # local monitor 00:04:08.467 10:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.467 10:25:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.467 10:25:57 -- pm/common@25 -- # sleep 1 00:04:08.467 10:25:57 -- pm/common@21 -- # date +%s 00:04:08.467 10:25:57 -- pm/common@21 -- # date +%s 00:04:08.467 10:25:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731407157 00:04:08.467 10:25:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731407157 00:04:08.467 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731407157_collect-vmstat.pm.log 00:04:08.467 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731407157_collect-cpu-load.pm.log 00:04:09.404 10:25:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.404 10:25:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.404 10:25:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.404 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:04:09.404 10:25:58 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.404 10:25:58 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:09.404 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:04:09.663 10:25:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:09.663 10:25:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:09.663 10:25:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:09.663 10:25:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.663 10:25:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:09.663 10:25:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.663 10:25:58 -- common/autotest_common.sh@1455 -- # uname 00:04:09.663 10:25:58 
-- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:09.663 10:25:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.663 10:25:58 -- common/autotest_common.sh@1475 -- # uname 00:04:09.663 10:25:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:09.664 10:25:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:09.664 10:25:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.664 lcov: LCOV version 1.15 00:04:09.664 10:25:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:24.545 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:24.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:39.471 10:26:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:39.471 10:26:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.471 10:26:27 -- common/autotest_common.sh@10 -- # set +x 00:04:39.471 10:26:27 -- spdk/autotest.sh@78 -- # rm -f 00:04:39.471 10:26:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.471 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:39.471 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:39.471 10:26:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:39.471 10:26:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:39.471 10:26:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:39.471 10:26:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:39.471 10:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.471 10:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:39.471 10:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:39.471 10:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.471 10:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.471 10:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.471 10:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:39.471 10:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:39.471 10:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:39.471 10:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.471 10:26:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.471 10:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:39.471 10:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:39.471 10:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:39.471 10:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.472 10:26:28 -- common/autotest_common.sh@1658 -- # for 
nvme in /sys/block/nvme* 00:04:39.472 10:26:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:39.472 10:26:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:39.472 10:26:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.472 10:26:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.472 10:26:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:39.472 10:26:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.472 10:26:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.472 10:26:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:39.472 10:26:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:39.472 10:26:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.472 No valid GPT data, bailing 00:04:39.472 10:26:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.472 10:26:28 -- scripts/common.sh@394 -- # pt= 00:04:39.472 10:26:28 -- scripts/common.sh@395 -- # return 1 00:04:39.472 10:26:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.472 1+0 records in 00:04:39.472 1+0 records out 00:04:39.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398297 s, 263 MB/s 00:04:39.472 10:26:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.472 10:26:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.472 10:26:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:39.472 10:26:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:39.472 10:26:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:39.731 No valid GPT data, bailing 00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # pt= 00:04:39.731 10:26:28 -- scripts/common.sh@395 -- # return 1 00:04:39.731 10:26:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:39.731 1+0 records in 00:04:39.731 1+0 records out 00:04:39.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410581 s, 255 MB/s 00:04:39.731 10:26:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.731 10:26:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.731 10:26:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:39.731 10:26:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:39.731 10:26:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:39.731 No valid GPT data, bailing 00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # pt= 00:04:39.731 10:26:28 -- scripts/common.sh@395 -- # return 1 00:04:39.731 10:26:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:39.731 1+0 records in 00:04:39.731 1+0 records out 00:04:39.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405189 s, 259 MB/s 00:04:39.731 10:26:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.731 10:26:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.731 10:26:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:39.731 10:26:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:39.731 10:26:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:39.731 No valid GPT data, bailing 
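[Annotation, not part of the captured log] The trace above shows autotest's pre-cleanup pass over the NVMe namespaces: each /dev/nvme*n* device is probed with scripts/spdk-gpt.py and blkid, and any device with no recognizable partition table has its first MiB zeroed before the tests run. A minimal bash sketch of that flow, assuming the device names and commands shown in the trace (the real loop in spdk/autotest.sh also skips zoned devices and handles partitions differently):

    for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1; do
        # "No valid GPT data, bailing" above corresponds to this probe coming back empty
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z "$pt" ]]; then
            # nothing claims the disk, so scrub its first MiB so stale metadata
            # cannot confuse later tests
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done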
00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.731 10:26:28 -- scripts/common.sh@394 -- # pt= 00:04:39.731 10:26:28 -- scripts/common.sh@395 -- # return 1 00:04:39.731 10:26:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:39.731 1+0 records in 00:04:39.731 1+0 records out 00:04:39.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372221 s, 282 MB/s 00:04:39.731 10:26:28 -- spdk/autotest.sh@105 -- # sync 00:04:39.991 10:26:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.991 10:26:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.991 10:26:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.895 10:26:30 -- spdk/autotest.sh@111 -- # uname -s 00:04:41.895 10:26:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:41.895 10:26:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:41.895 10:26:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.723 Hugepages 00:04:42.723 node hugesize free / total 00:04:42.723 node0 1048576kB 0 / 0 00:04:42.723 node0 2048kB 0 / 0 00:04:42.723 00:04:42.723 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.723 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:42.723 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:42.723 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:42.723 10:26:31 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.723 10:26:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.723 10:26:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.723 10:26:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.661 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.661 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.661 10:26:32 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:45.038 10:26:33 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:45.038 10:26:33 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:45.038 10:26:33 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.038 10:26:33 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:45.038 10:26:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:45.038 10:26:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:45.038 10:26:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.038 10:26:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.038 10:26:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:45.038 10:26:33 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:45.038 10:26:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.038 10:26:33 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.038 Waiting for block devices as requested 00:04:45.297 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:04:45.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.297 10:26:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:45.297 10:26:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:45.297 10:26:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:45.297 10:26:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:45.297 10:26:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:45.297 10:26:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:45.297 10:26:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:45.297 10:26:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:45.297 10:26:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:45.297 10:26:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:45.297 10:26:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:45.297 10:26:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:45.297 10:26:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:45.297 10:26:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:45.297 10:26:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:45.297 10:26:34 -- common/autotest_common.sh@1541 -- # continue 00:04:45.298 10:26:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:45.298 10:26:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.298 10:26:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:45.298 10:26:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:45.298 10:26:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:45.298 10:26:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:45.298 10:26:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:45.298 10:26:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:45.556 10:26:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:45.557 10:26:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:45.557 10:26:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:45.557 10:26:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:45.557 10:26:34 -- 
common/autotest_common.sh@1538 -- # grep unvmcap 00:04:45.557 10:26:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:45.557 10:26:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:45.557 10:26:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:45.557 10:26:34 -- common/autotest_common.sh@1541 -- # continue 00:04:45.557 10:26:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:45.557 10:26:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.557 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.557 10:26:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:45.557 10:26:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.557 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.557 10:26:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.125 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.384 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.384 10:26:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:46.384 10:26:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.384 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:04:46.384 10:26:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:46.384 10:26:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:46.384 10:26:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.384 10:26:34 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:46.384 10:26:34 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:46.384 10:26:34 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:46.384 10:26:34 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:46.384 10:26:34 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:46.384 10:26:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:46.384 10:26:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:46.384 10:26:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.384 10:26:34 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.384 10:26:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:46.384 10:26:35 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:46.384 10:26:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:46.384 10:26:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:46.384 10:26:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.384 10:26:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:46.384 10:26:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.384 10:26:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:46.384 10:26:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.384 10:26:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:46.384 10:26:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.384 10:26:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:46.384 10:26:35 -- common/autotest_common.sh@1570 -- # return 0 00:04:46.384 10:26:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:46.384 10:26:35 
-- common/autotest_common.sh@1578 -- # return 0 00:04:46.384 10:26:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:46.384 10:26:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:46.384 10:26:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.384 10:26:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.384 10:26:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:46.384 10:26:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.384 10:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.384 10:26:35 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:46.384 10:26:35 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.384 10:26:35 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.384 10:26:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.384 10:26:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.384 10:26:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.384 10:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.384 ************************************ 00:04:46.384 START TEST env 00:04:46.384 ************************************ 00:04:46.384 10:26:35 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.644 * Looking for test storage... 00:04:46.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.644 10:26:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.644 10:26:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.644 10:26:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.644 10:26:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.644 10:26:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.644 10:26:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.644 10:26:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.644 10:26:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.644 10:26:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.644 10:26:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.644 10:26:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.644 10:26:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:46.644 10:26:35 env -- scripts/common.sh@345 -- # : 1 00:04:46.644 10:26:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.644 10:26:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.644 10:26:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:46.644 10:26:35 env -- scripts/common.sh@353 -- # local d=1 00:04:46.644 10:26:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.644 10:26:35 env -- scripts/common.sh@355 -- # echo 1 00:04:46.644 10:26:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.644 10:26:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:46.644 10:26:35 env -- scripts/common.sh@353 -- # local d=2 00:04:46.644 10:26:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.644 10:26:35 env -- scripts/common.sh@355 -- # echo 2 00:04:46.644 10:26:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.644 10:26:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.644 10:26:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.644 10:26:35 env -- scripts/common.sh@368 -- # return 0 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.644 --rc genhtml_branch_coverage=1 00:04:46.644 --rc genhtml_function_coverage=1 00:04:46.644 --rc genhtml_legend=1 00:04:46.644 --rc geninfo_all_blocks=1 00:04:46.644 --rc geninfo_unexecuted_blocks=1 00:04:46.644 00:04:46.644 ' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.644 --rc genhtml_branch_coverage=1 00:04:46.644 --rc genhtml_function_coverage=1 00:04:46.644 --rc genhtml_legend=1 00:04:46.644 --rc geninfo_all_blocks=1 00:04:46.644 --rc geninfo_unexecuted_blocks=1 00:04:46.644 00:04:46.644 ' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.644 --rc genhtml_branch_coverage=1 00:04:46.644 --rc genhtml_function_coverage=1 00:04:46.644 --rc genhtml_legend=1 00:04:46.644 --rc geninfo_all_blocks=1 00:04:46.644 --rc geninfo_unexecuted_blocks=1 00:04:46.644 00:04:46.644 ' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.644 --rc genhtml_branch_coverage=1 00:04:46.644 --rc genhtml_function_coverage=1 00:04:46.644 --rc genhtml_legend=1 00:04:46.644 --rc geninfo_all_blocks=1 00:04:46.644 --rc geninfo_unexecuted_blocks=1 00:04:46.644 00:04:46.644 ' 00:04:46.644 10:26:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.644 10:26:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.644 10:26:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.644 ************************************ 00:04:46.644 START TEST env_memory 00:04:46.644 ************************************ 00:04:46.644 10:26:35 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.644 00:04:46.644 00:04:46.644 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.644 http://cunit.sourceforge.net/ 00:04:46.644 00:04:46.644 00:04:46.644 Suite: memory 00:04:46.644 Test: alloc and free memory map ...[2024-11-12 10:26:35.328799] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.644 passed 00:04:46.644 Test: mem map translation ...[2024-11-12 10:26:35.360408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.644 [2024-11-12 10:26:35.360451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.644 [2024-11-12 10:26:35.360513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.644 [2024-11-12 10:26:35.360535] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.903 passed 00:04:46.903 Test: mem map registration ...[2024-11-12 10:26:35.424417] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:46.903 [2024-11-12 10:26:35.424453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:46.903 passed 00:04:46.903 Test: mem map adjacent registrations ...passed 00:04:46.903 00:04:46.903 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.903 suites 1 1 n/a 0 0 00:04:46.904 tests 4 4 4 0 0 00:04:46.904 asserts 152 152 152 0 n/a 00:04:46.904 00:04:46.904 Elapsed time = 0.213 seconds 00:04:46.904 00:04:46.904 real 0m0.231s 00:04:46.904 user 0m0.216s 00:04:46.904 sys 0m0.010s 00:04:46.904 10:26:35 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.904 10:26:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.904 ************************************ 00:04:46.904 END TEST env_memory 00:04:46.904 ************************************ 00:04:46.904 10:26:35 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.904 10:26:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.904 10:26:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.904 10:26:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.904 ************************************ 00:04:46.904 START TEST env_vtophys 00:04:46.904 ************************************ 00:04:46.904 10:26:35 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.904 EAL: lib.eal log level changed from notice to debug 00:04:46.904 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.904 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.904 EAL: Maximum logical cores by configuration: 128 00:04:46.904 EAL: Detected CPU lcores: 10 00:04:46.904 EAL: Detected NUMA nodes: 1 00:04:46.904 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.904 EAL: Detected shared linkage of DPDK 00:04:46.904 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:46.904 EAL: Selected IOVA mode 'PA' 00:04:46.904 EAL: Probing VFIO support... 00:04:46.904 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.904 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.904 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.904 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.904 EAL: Setting up physically contiguous memory... 00:04:46.904 EAL: Setting maximum number of open files to 524288 00:04:46.904 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.904 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.904 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.904 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.904 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.904 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.904 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.904 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.904 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.904 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.904 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.904 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.904 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.904 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.904 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.904 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.904 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.904 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.904 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.904 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.904 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.904 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.904 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.904 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.904 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.904 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.904 EAL: Hugepages will be freed exactly as allocated. 00:04:46.904 EAL: No shared files mode enabled, IPC is disabled 00:04:46.904 EAL: No shared files mode enabled, IPC is disabled 00:04:47.163 EAL: TSC frequency is ~2200000 KHz 00:04:47.163 EAL: Main lcore 0 is ready (tid=7fe44028ca00;cpuset=[0]) 00:04:47.163 EAL: Trying to obtain current memory policy. 00:04:47.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.163 EAL: Restoring previous memory policy: 0 00:04:47.163 EAL: request: mp_malloc_sync 00:04:47.163 EAL: No shared files mode enabled, IPC is disabled 00:04:47.163 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.163 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.163 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.163 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.163 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:47.163 00:04:47.163 00:04:47.163 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.163 http://cunit.sourceforge.net/ 00:04:47.163 00:04:47.163 00:04:47.163 Suite: components_suite 00:04:47.163 Test: vtophys_malloc_test ...passed 00:04:47.163 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.164 EAL: Trying to obtain current memory policy. 
00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.164 EAL: Restoring previous memory policy: 4 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.164 EAL: request: mp_malloc_sync 00:04:47.164 EAL: No shared files mode enabled, IPC is disabled 00:04:47.164 EAL: Heap on socket 0 was shrunk by 258MB 00:04:47.164 EAL: Trying to obtain current memory policy. 00:04:47.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.424 EAL: Restoring previous memory policy: 4 00:04:47.424 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.424 EAL: request: mp_malloc_sync 00:04:47.424 EAL: No shared files mode enabled, IPC is disabled 00:04:47.424 EAL: Heap on socket 0 was expanded by 514MB 00:04:47.424 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.424 EAL: request: mp_malloc_sync 00:04:47.424 EAL: No shared files mode enabled, IPC is disabled 00:04:47.424 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.424 EAL: Trying to obtain current memory policy. 
00:04:47.424 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.683 EAL: Restoring previous memory policy: 4 00:04:47.683 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.683 EAL: request: mp_malloc_sync 00:04:47.683 EAL: No shared files mode enabled, IPC is disabled 00:04:47.683 EAL: Heap on socket 0 was expanded by 1026MB 00:04:47.683 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.943 passed 00:04:47.943 00:04:47.943 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.943 suites 1 1 n/a 0 0 00:04:47.943 tests 2 2 2 0 0 00:04:47.943 asserts 5428 5428 5428 0 n/a 00:04:47.943 00:04:47.943 Elapsed time = 0.695 seconds 00:04:47.943 EAL: request: mp_malloc_sync 00:04:47.943 EAL: No shared files mode enabled, IPC is disabled 00:04:47.943 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.943 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.943 EAL: request: mp_malloc_sync 00:04:47.943 EAL: No shared files mode enabled, IPC is disabled 00:04:47.943 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.943 EAL: No shared files mode enabled, IPC is disabled 00:04:47.943 EAL: No shared files mode enabled, IPC is disabled 00:04:47.943 EAL: No shared files mode enabled, IPC is disabled 00:04:47.943 00:04:47.943 real 0m0.903s 00:04:47.943 user 0m0.467s 00:04:47.943 sys 0m0.305s 00:04:47.943 10:26:36 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.943 10:26:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:47.943 ************************************ 00:04:47.943 END TEST env_vtophys 00:04:47.943 ************************************ 00:04:47.943 10:26:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:47.943 10:26:36 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.943 10:26:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.943 10:26:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.943 ************************************ 00:04:47.943 START TEST env_pci 00:04:47.943 ************************************ 00:04:47.943 10:26:36 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:47.943 00:04:47.943 00:04:47.943 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.943 http://cunit.sourceforge.net/ 00:04:47.943 00:04:47.943 00:04:47.943 Suite: pci 00:04:47.943 Test: pci_hook ...[2024-11-12 10:26:36.530634] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56599 has claimed it 00:04:47.943 passed 00:04:47.943 00:04:47.943 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.943 suites 1 1 n/a 0 0 00:04:47.943 tests 1 1 1 0 0 00:04:47.943 asserts 25 25 25 0 n/a 00:04:47.943 00:04:47.943 Elapsed time = 0.002 seconds 00:04:47.943 EAL: Cannot find device (10000:00:01.0) 00:04:47.943 EAL: Failed to attach device on primary process 00:04:47.943 00:04:47.943 real 0m0.018s 00:04:47.943 user 0m0.006s 00:04:47.943 sys 0m0.012s 00:04:47.943 10:26:36 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.943 10:26:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.943 ************************************ 00:04:47.943 END TEST env_pci 00:04:47.943 ************************************ 00:04:47.943 10:26:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.943 10:26:36 env -- env/env.sh@15 -- # uname 00:04:47.943 10:26:36 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.943 10:26:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.943 10:26:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.943 10:26:36 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:47.943 10:26:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.943 10:26:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.943 ************************************ 00:04:47.943 START TEST env_dpdk_post_init 00:04:47.943 ************************************ 00:04:47.943 10:26:36 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.943 EAL: Detected CPU lcores: 10 00:04:47.943 EAL: Detected NUMA nodes: 1 00:04:47.943 EAL: Detected shared linkage of DPDK 00:04:47.943 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.943 EAL: Selected IOVA mode 'PA' 00:04:48.203 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.203 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:48.203 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:48.203 Starting DPDK initialization... 00:04:48.203 Starting SPDK post initialization... 00:04:48.203 SPDK NVMe probe 00:04:48.203 Attaching to 0000:00:10.0 00:04:48.203 Attaching to 0000:00:11.0 00:04:48.203 Attached to 0000:00:10.0 00:04:48.203 Attached to 0000:00:11.0 00:04:48.203 Cleaning up... 00:04:48.203 00:04:48.203 real 0m0.186s 00:04:48.203 user 0m0.056s 00:04:48.203 sys 0m0.030s 00:04:48.203 10:26:36 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.203 10:26:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.203 ************************************ 00:04:48.203 END TEST env_dpdk_post_init 00:04:48.203 ************************************ 00:04:48.203 10:26:36 env -- env/env.sh@26 -- # uname 00:04:48.203 10:26:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.203 10:26:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.203 10:26:36 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.203 10:26:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.203 10:26:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.203 ************************************ 00:04:48.203 START TEST env_mem_callbacks 00:04:48.203 ************************************ 00:04:48.203 10:26:36 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.203 EAL: Detected CPU lcores: 10 00:04:48.203 EAL: Detected NUMA nodes: 1 00:04:48.203 EAL: Detected shared linkage of DPDK 00:04:48.203 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.203 EAL: Selected IOVA mode 'PA' 00:04:48.203 00:04:48.203 00:04:48.203 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.203 http://cunit.sourceforge.net/ 00:04:48.203 00:04:48.203 00:04:48.203 Suite: memory 00:04:48.203 Test: test ... 
00:04:48.203 register 0x200000200000 2097152 00:04:48.203 malloc 3145728 00:04:48.203 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.203 register 0x200000400000 4194304 00:04:48.203 buf 0x200000500000 len 3145728 PASSED 00:04:48.203 malloc 64 00:04:48.203 buf 0x2000004fff40 len 64 PASSED 00:04:48.203 malloc 4194304 00:04:48.203 register 0x200000800000 6291456 00:04:48.203 buf 0x200000a00000 len 4194304 PASSED 00:04:48.203 free 0x200000500000 3145728 00:04:48.203 free 0x2000004fff40 64 00:04:48.203 unregister 0x200000400000 4194304 PASSED 00:04:48.462 free 0x200000a00000 4194304 00:04:48.462 unregister 0x200000800000 6291456 PASSED 00:04:48.462 malloc 8388608 00:04:48.462 register 0x200000400000 10485760 00:04:48.462 buf 0x200000600000 len 8388608 PASSED 00:04:48.462 free 0x200000600000 8388608 00:04:48.462 unregister 0x200000400000 10485760 PASSED 00:04:48.462 passed 00:04:48.462 00:04:48.462 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.462 suites 1 1 n/a 0 0 00:04:48.462 tests 1 1 1 0 0 00:04:48.462 asserts 15 15 15 0 n/a 00:04:48.462 00:04:48.462 Elapsed time = 0.005 seconds 00:04:48.462 00:04:48.462 real 0m0.131s 00:04:48.462 user 0m0.009s 00:04:48.462 sys 0m0.021s 00:04:48.462 10:26:36 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.462 10:26:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:48.462 ************************************ 00:04:48.462 END TEST env_mem_callbacks 00:04:48.462 ************************************ 00:04:48.462 00:04:48.462 real 0m1.925s 00:04:48.462 user 0m0.967s 00:04:48.462 sys 0m0.604s 00:04:48.462 10:26:37 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.462 10:26:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.462 ************************************ 00:04:48.462 END TEST env 00:04:48.462 ************************************ 00:04:48.462 10:26:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.462 10:26:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.462 10:26:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.462 10:26:37 -- common/autotest_common.sh@10 -- # set +x 00:04:48.462 ************************************ 00:04:48.462 START TEST rpc 00:04:48.462 ************************************ 00:04:48.462 10:26:37 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.462 * Looking for test storage... 
00:04:48.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.462 10:26:37 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.462 10:26:37 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.462 10:26:37 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.462 10:26:37 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.462 10:26:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.721 10:26:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.721 10:26:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.721 10:26:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.721 10:26:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.721 10:26:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.721 10:26:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.722 10:26:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.722 10:26:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.722 10:26:37 rpc -- scripts/common.sh@345 -- # : 1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.722 10:26:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.722 10:26:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.722 10:26:37 rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.722 10:26:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.722 10:26:37 rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.722 10:26:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.722 10:26:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.722 10:26:37 rpc -- scripts/common.sh@368 -- # return 0 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.722 --rc genhtml_branch_coverage=1 00:04:48.722 --rc genhtml_function_coverage=1 00:04:48.722 --rc genhtml_legend=1 00:04:48.722 --rc geninfo_all_blocks=1 00:04:48.722 --rc geninfo_unexecuted_blocks=1 00:04:48.722 00:04:48.722 ' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.722 --rc genhtml_branch_coverage=1 00:04:48.722 --rc genhtml_function_coverage=1 00:04:48.722 --rc genhtml_legend=1 00:04:48.722 --rc geninfo_all_blocks=1 00:04:48.722 --rc geninfo_unexecuted_blocks=1 00:04:48.722 00:04:48.722 ' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.722 --rc genhtml_branch_coverage=1 00:04:48.722 --rc genhtml_function_coverage=1 00:04:48.722 --rc 
genhtml_legend=1 00:04:48.722 --rc geninfo_all_blocks=1 00:04:48.722 --rc geninfo_unexecuted_blocks=1 00:04:48.722 00:04:48.722 ' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.722 --rc genhtml_branch_coverage=1 00:04:48.722 --rc genhtml_function_coverage=1 00:04:48.722 --rc genhtml_legend=1 00:04:48.722 --rc geninfo_all_blocks=1 00:04:48.722 --rc geninfo_unexecuted_blocks=1 00:04:48.722 00:04:48.722 ' 00:04:48.722 10:26:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56717 00:04:48.722 10:26:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.722 10:26:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:48.722 10:26:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56717 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@833 -- # '[' -z 56717 ']' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.722 10:26:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.722 [2024-11-12 10:26:37.306454] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:04:48.722 [2024-11-12 10:26:37.306567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56717 ] 00:04:48.722 [2024-11-12 10:26:37.450391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.981 [2024-11-12 10:26:37.480219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:48.981 [2024-11-12 10:26:37.480279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56717' to capture a snapshot of events at runtime. 00:04:48.981 [2024-11-12 10:26:37.480304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:48.981 [2024-11-12 10:26:37.480312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:48.981 [2024-11-12 10:26:37.480318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56717 for offline analysis/debug. 
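The app_setup_trace notices above spell out the two ways to get at the 'bdev' tracepoint data of the target that just started. A minimal sketch of both, lifted directly from those notices; the only assumptions are that the spdk_trace tool was built under build/bin of this repo checkout and that pid 56717 is still the live spdk_tgt:

```bash
# Live snapshot: attach to the running target by app name (-s) and pid (-p),
# exactly as the app_setup_trace notice suggests.
./build/bin/spdk_trace -s spdk_tgt -p 56717

# Offline route: while the target runs, its trace ring sits in /dev/shm;
# copying the file out preserves it for later analysis/debug, as the notice says.
cp /dev/shm/spdk_tgt_trace.pid56717 /tmp/
```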
00:04:48.981 [2024-11-12 10:26:37.480698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.981 [2024-11-12 10:26:37.520273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:48.981 10:26:37 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.981 10:26:37 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:48.981 10:26:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.981 10:26:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.981 10:26:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.981 10:26:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.981 10:26:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.981 10:26:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.981 10:26:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.982 ************************************ 00:04:48.982 START TEST rpc_integrity 00:04:48.982 ************************************ 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:48.982 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.982 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.982 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.982 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.982 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.982 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.241 { 00:04:49.241 "name": "Malloc0", 00:04:49.241 "aliases": [ 00:04:49.241 "434edfb6-ac25-428a-afe5-624651cc4fd8" 00:04:49.241 ], 00:04:49.241 "product_name": "Malloc disk", 00:04:49.241 "block_size": 512, 00:04:49.241 "num_blocks": 16384, 00:04:49.241 "uuid": "434edfb6-ac25-428a-afe5-624651cc4fd8", 00:04:49.241 "assigned_rate_limits": { 00:04:49.241 "rw_ios_per_sec": 0, 00:04:49.241 "rw_mbytes_per_sec": 0, 00:04:49.241 "r_mbytes_per_sec": 0, 00:04:49.241 "w_mbytes_per_sec": 0 00:04:49.241 }, 00:04:49.241 "claimed": false, 00:04:49.241 "zoned": false, 00:04:49.241 
"supported_io_types": { 00:04:49.241 "read": true, 00:04:49.241 "write": true, 00:04:49.241 "unmap": true, 00:04:49.241 "flush": true, 00:04:49.241 "reset": true, 00:04:49.241 "nvme_admin": false, 00:04:49.241 "nvme_io": false, 00:04:49.241 "nvme_io_md": false, 00:04:49.241 "write_zeroes": true, 00:04:49.241 "zcopy": true, 00:04:49.241 "get_zone_info": false, 00:04:49.241 "zone_management": false, 00:04:49.241 "zone_append": false, 00:04:49.241 "compare": false, 00:04:49.241 "compare_and_write": false, 00:04:49.241 "abort": true, 00:04:49.241 "seek_hole": false, 00:04:49.241 "seek_data": false, 00:04:49.241 "copy": true, 00:04:49.241 "nvme_iov_md": false 00:04:49.241 }, 00:04:49.241 "memory_domains": [ 00:04:49.241 { 00:04:49.241 "dma_device_id": "system", 00:04:49.241 "dma_device_type": 1 00:04:49.241 }, 00:04:49.241 { 00:04:49.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.241 "dma_device_type": 2 00:04:49.241 } 00:04:49.241 ], 00:04:49.241 "driver_specific": {} 00:04:49.241 } 00:04:49.241 ]' 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 [2024-11-12 10:26:37.819963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.241 [2024-11-12 10:26:37.820033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.241 [2024-11-12 10:26:37.820052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9cc30 00:04:49.241 [2024-11-12 10:26:37.820073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.241 [2024-11-12 10:26:37.821755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.241 [2024-11-12 10:26:37.821800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.241 Passthru0 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.241 { 00:04:49.241 "name": "Malloc0", 00:04:49.241 "aliases": [ 00:04:49.241 "434edfb6-ac25-428a-afe5-624651cc4fd8" 00:04:49.241 ], 00:04:49.241 "product_name": "Malloc disk", 00:04:49.241 "block_size": 512, 00:04:49.241 "num_blocks": 16384, 00:04:49.241 "uuid": "434edfb6-ac25-428a-afe5-624651cc4fd8", 00:04:49.241 "assigned_rate_limits": { 00:04:49.241 "rw_ios_per_sec": 0, 00:04:49.241 "rw_mbytes_per_sec": 0, 00:04:49.241 "r_mbytes_per_sec": 0, 00:04:49.241 "w_mbytes_per_sec": 0 00:04:49.241 }, 00:04:49.241 "claimed": true, 00:04:49.241 "claim_type": "exclusive_write", 00:04:49.241 "zoned": false, 00:04:49.241 "supported_io_types": { 00:04:49.241 "read": true, 00:04:49.241 "write": true, 00:04:49.241 "unmap": true, 00:04:49.241 "flush": true, 00:04:49.241 "reset": true, 00:04:49.241 "nvme_admin": false, 
00:04:49.241 "nvme_io": false, 00:04:49.241 "nvme_io_md": false, 00:04:49.241 "write_zeroes": true, 00:04:49.241 "zcopy": true, 00:04:49.241 "get_zone_info": false, 00:04:49.241 "zone_management": false, 00:04:49.241 "zone_append": false, 00:04:49.241 "compare": false, 00:04:49.241 "compare_and_write": false, 00:04:49.241 "abort": true, 00:04:49.241 "seek_hole": false, 00:04:49.241 "seek_data": false, 00:04:49.241 "copy": true, 00:04:49.241 "nvme_iov_md": false 00:04:49.241 }, 00:04:49.241 "memory_domains": [ 00:04:49.241 { 00:04:49.241 "dma_device_id": "system", 00:04:49.241 "dma_device_type": 1 00:04:49.241 }, 00:04:49.241 { 00:04:49.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.241 "dma_device_type": 2 00:04:49.241 } 00:04:49.241 ], 00:04:49.241 "driver_specific": {} 00:04:49.241 }, 00:04:49.241 { 00:04:49.241 "name": "Passthru0", 00:04:49.241 "aliases": [ 00:04:49.241 "7d547cf1-e5ef-5f05-a037-52b69c517ccb" 00:04:49.241 ], 00:04:49.241 "product_name": "passthru", 00:04:49.241 "block_size": 512, 00:04:49.241 "num_blocks": 16384, 00:04:49.241 "uuid": "7d547cf1-e5ef-5f05-a037-52b69c517ccb", 00:04:49.241 "assigned_rate_limits": { 00:04:49.241 "rw_ios_per_sec": 0, 00:04:49.241 "rw_mbytes_per_sec": 0, 00:04:49.241 "r_mbytes_per_sec": 0, 00:04:49.241 "w_mbytes_per_sec": 0 00:04:49.241 }, 00:04:49.241 "claimed": false, 00:04:49.241 "zoned": false, 00:04:49.241 "supported_io_types": { 00:04:49.241 "read": true, 00:04:49.241 "write": true, 00:04:49.241 "unmap": true, 00:04:49.241 "flush": true, 00:04:49.241 "reset": true, 00:04:49.241 "nvme_admin": false, 00:04:49.241 "nvme_io": false, 00:04:49.241 "nvme_io_md": false, 00:04:49.241 "write_zeroes": true, 00:04:49.241 "zcopy": true, 00:04:49.241 "get_zone_info": false, 00:04:49.241 "zone_management": false, 00:04:49.241 "zone_append": false, 00:04:49.241 "compare": false, 00:04:49.241 "compare_and_write": false, 00:04:49.241 "abort": true, 00:04:49.241 "seek_hole": false, 00:04:49.241 "seek_data": false, 00:04:49.241 "copy": true, 00:04:49.241 "nvme_iov_md": false 00:04:49.241 }, 00:04:49.241 "memory_domains": [ 00:04:49.241 { 00:04:49.241 "dma_device_id": "system", 00:04:49.241 "dma_device_type": 1 00:04:49.241 }, 00:04:49.241 { 00:04:49.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.241 "dma_device_type": 2 00:04:49.241 } 00:04:49.241 ], 00:04:49.241 "driver_specific": { 00:04:49.241 "passthru": { 00:04:49.241 "name": "Passthru0", 00:04:49.241 "base_bdev_name": "Malloc0" 00:04:49.241 } 00:04:49.241 } 00:04:49.241 } 00:04:49.241 ]' 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.241 10:26:37 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.241 10:26:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.241 00:04:49.241 real 0m0.330s 00:04:49.241 user 0m0.212s 00:04:49.241 sys 0m0.045s 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.241 10:26:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.241 ************************************ 00:04:49.241 END TEST rpc_integrity 00:04:49.241 ************************************ 00:04:49.528 10:26:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.528 10:26:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.528 10:26:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.528 10:26:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.528 ************************************ 00:04:49.528 START TEST rpc_plugins 00:04:49.528 ************************************ 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.528 { 00:04:49.528 "name": "Malloc1", 00:04:49.528 "aliases": [ 00:04:49.528 "30f5fa47-97e5-40c0-9f13-db93051dce17" 00:04:49.528 ], 00:04:49.528 "product_name": "Malloc disk", 00:04:49.528 "block_size": 4096, 00:04:49.528 "num_blocks": 256, 00:04:49.528 "uuid": "30f5fa47-97e5-40c0-9f13-db93051dce17", 00:04:49.528 "assigned_rate_limits": { 00:04:49.528 "rw_ios_per_sec": 0, 00:04:49.528 "rw_mbytes_per_sec": 0, 00:04:49.528 "r_mbytes_per_sec": 0, 00:04:49.528 "w_mbytes_per_sec": 0 00:04:49.528 }, 00:04:49.528 "claimed": false, 00:04:49.528 "zoned": false, 00:04:49.528 "supported_io_types": { 00:04:49.528 "read": true, 00:04:49.528 "write": true, 00:04:49.528 "unmap": true, 00:04:49.528 "flush": true, 00:04:49.528 "reset": true, 00:04:49.528 "nvme_admin": false, 00:04:49.528 "nvme_io": false, 00:04:49.528 "nvme_io_md": false, 00:04:49.528 "write_zeroes": true, 00:04:49.528 "zcopy": true, 00:04:49.528 "get_zone_info": false, 00:04:49.528 "zone_management": false, 00:04:49.528 "zone_append": false, 00:04:49.528 "compare": false, 00:04:49.528 "compare_and_write": false, 00:04:49.528 "abort": true, 00:04:49.528 "seek_hole": false, 00:04:49.528 "seek_data": false, 00:04:49.528 "copy": true, 00:04:49.528 "nvme_iov_md": false 00:04:49.528 }, 00:04:49.528 "memory_domains": [ 00:04:49.528 { 
00:04:49.528 "dma_device_id": "system", 00:04:49.528 "dma_device_type": 1 00:04:49.528 }, 00:04:49.528 { 00:04:49.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.528 "dma_device_type": 2 00:04:49.528 } 00:04:49.528 ], 00:04:49.528 "driver_specific": {} 00:04:49.528 } 00:04:49.528 ]' 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.528 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.528 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.529 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.529 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.529 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:49.529 10:26:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.529 00:04:49.529 real 0m0.149s 00:04:49.529 user 0m0.097s 00:04:49.529 sys 0m0.017s 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.529 10:26:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 ************************************ 00:04:49.529 END TEST rpc_plugins 00:04:49.529 ************************************ 00:04:49.529 10:26:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.529 10:26:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.529 10:26:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.529 10:26:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 ************************************ 00:04:49.529 START TEST rpc_trace_cmd_test 00:04:49.529 ************************************ 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:49.529 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56717", 00:04:49.529 "tpoint_group_mask": "0x8", 00:04:49.529 "iscsi_conn": { 00:04:49.529 "mask": "0x2", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "scsi": { 00:04:49.529 "mask": "0x4", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "bdev": { 00:04:49.529 "mask": "0x8", 00:04:49.529 "tpoint_mask": "0xffffffffffffffff" 00:04:49.529 }, 00:04:49.529 "nvmf_rdma": { 00:04:49.529 "mask": "0x10", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "nvmf_tcp": { 00:04:49.529 "mask": "0x20", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "ftl": { 00:04:49.529 
"mask": "0x40", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "blobfs": { 00:04:49.529 "mask": "0x80", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "dsa": { 00:04:49.529 "mask": "0x200", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "thread": { 00:04:49.529 "mask": "0x400", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "nvme_pcie": { 00:04:49.529 "mask": "0x800", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "iaa": { 00:04:49.529 "mask": "0x1000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "nvme_tcp": { 00:04:49.529 "mask": "0x2000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "bdev_nvme": { 00:04:49.529 "mask": "0x4000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "sock": { 00:04:49.529 "mask": "0x8000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "blob": { 00:04:49.529 "mask": "0x10000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "bdev_raid": { 00:04:49.529 "mask": "0x20000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 }, 00:04:49.529 "scheduler": { 00:04:49.529 "mask": "0x40000", 00:04:49.529 "tpoint_mask": "0x0" 00:04:49.529 } 00:04:49.529 }' 00:04:49.529 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.786 00:04:49.786 real 0m0.286s 00:04:49.786 user 0m0.247s 00:04:49.786 sys 0m0.029s 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.786 10:26:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.786 ************************************ 00:04:49.787 END TEST rpc_trace_cmd_test 00:04:49.787 ************************************ 00:04:50.045 10:26:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.045 10:26:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.045 10:26:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.045 10:26:38 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.045 10:26:38 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.045 10:26:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 ************************************ 00:04:50.045 START TEST rpc_daemon_integrity 00:04:50.045 ************************************ 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 
10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.045 { 00:04:50.045 "name": "Malloc2", 00:04:50.045 "aliases": [ 00:04:50.045 "07f2126d-70ab-4799-92bd-ac9e72b54055" 00:04:50.045 ], 00:04:50.045 "product_name": "Malloc disk", 00:04:50.045 "block_size": 512, 00:04:50.045 "num_blocks": 16384, 00:04:50.045 "uuid": "07f2126d-70ab-4799-92bd-ac9e72b54055", 00:04:50.045 "assigned_rate_limits": { 00:04:50.045 "rw_ios_per_sec": 0, 00:04:50.045 "rw_mbytes_per_sec": 0, 00:04:50.045 "r_mbytes_per_sec": 0, 00:04:50.045 "w_mbytes_per_sec": 0 00:04:50.045 }, 00:04:50.045 "claimed": false, 00:04:50.045 "zoned": false, 00:04:50.045 "supported_io_types": { 00:04:50.045 "read": true, 00:04:50.045 "write": true, 00:04:50.045 "unmap": true, 00:04:50.045 "flush": true, 00:04:50.045 "reset": true, 00:04:50.045 "nvme_admin": false, 00:04:50.045 "nvme_io": false, 00:04:50.045 "nvme_io_md": false, 00:04:50.045 "write_zeroes": true, 00:04:50.045 "zcopy": true, 00:04:50.045 "get_zone_info": false, 00:04:50.045 "zone_management": false, 00:04:50.045 "zone_append": false, 00:04:50.045 "compare": false, 00:04:50.045 "compare_and_write": false, 00:04:50.045 "abort": true, 00:04:50.045 "seek_hole": false, 00:04:50.045 "seek_data": false, 00:04:50.045 "copy": true, 00:04:50.045 "nvme_iov_md": false 00:04:50.045 }, 00:04:50.045 "memory_domains": [ 00:04:50.045 { 00:04:50.045 "dma_device_id": "system", 00:04:50.045 "dma_device_type": 1 00:04:50.045 }, 00:04:50.045 { 00:04:50.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.045 "dma_device_type": 2 00:04:50.045 } 00:04:50.045 ], 00:04:50.045 "driver_specific": {} 00:04:50.045 } 00:04:50.045 ]' 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 [2024-11-12 10:26:38.740411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.045 [2024-11-12 10:26:38.740454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:50.045 [2024-11-12 10:26:38.740486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1edf0e0 00:04:50.045 [2024-11-12 10:26:38.740495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.045 [2024-11-12 10:26:38.741914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.045 [2024-11-12 10:26:38.741959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.045 Passthru0 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.045 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.045 { 00:04:50.045 "name": "Malloc2", 00:04:50.045 "aliases": [ 00:04:50.045 "07f2126d-70ab-4799-92bd-ac9e72b54055" 00:04:50.045 ], 00:04:50.045 "product_name": "Malloc disk", 00:04:50.045 "block_size": 512, 00:04:50.045 "num_blocks": 16384, 00:04:50.045 "uuid": "07f2126d-70ab-4799-92bd-ac9e72b54055", 00:04:50.045 "assigned_rate_limits": { 00:04:50.045 "rw_ios_per_sec": 0, 00:04:50.045 "rw_mbytes_per_sec": 0, 00:04:50.045 "r_mbytes_per_sec": 0, 00:04:50.045 "w_mbytes_per_sec": 0 00:04:50.045 }, 00:04:50.045 "claimed": true, 00:04:50.045 "claim_type": "exclusive_write", 00:04:50.045 "zoned": false, 00:04:50.045 "supported_io_types": { 00:04:50.045 "read": true, 00:04:50.045 "write": true, 00:04:50.045 "unmap": true, 00:04:50.045 "flush": true, 00:04:50.045 "reset": true, 00:04:50.046 "nvme_admin": false, 00:04:50.046 "nvme_io": false, 00:04:50.046 "nvme_io_md": false, 00:04:50.046 "write_zeroes": true, 00:04:50.046 "zcopy": true, 00:04:50.046 "get_zone_info": false, 00:04:50.046 "zone_management": false, 00:04:50.046 "zone_append": false, 00:04:50.046 "compare": false, 00:04:50.046 "compare_and_write": false, 00:04:50.046 "abort": true, 00:04:50.046 "seek_hole": false, 00:04:50.046 "seek_data": false, 00:04:50.046 "copy": true, 00:04:50.046 "nvme_iov_md": false 00:04:50.046 }, 00:04:50.046 "memory_domains": [ 00:04:50.046 { 00:04:50.046 "dma_device_id": "system", 00:04:50.046 "dma_device_type": 1 00:04:50.046 }, 00:04:50.046 { 00:04:50.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.046 "dma_device_type": 2 00:04:50.046 } 00:04:50.046 ], 00:04:50.046 "driver_specific": {} 00:04:50.046 }, 00:04:50.046 { 00:04:50.046 "name": "Passthru0", 00:04:50.046 "aliases": [ 00:04:50.046 "fa39fa5c-62e6-55a4-8b03-75c7745b63bd" 00:04:50.046 ], 00:04:50.046 "product_name": "passthru", 00:04:50.046 "block_size": 512, 00:04:50.046 "num_blocks": 16384, 00:04:50.046 "uuid": "fa39fa5c-62e6-55a4-8b03-75c7745b63bd", 00:04:50.046 "assigned_rate_limits": { 00:04:50.046 "rw_ios_per_sec": 0, 00:04:50.046 "rw_mbytes_per_sec": 0, 00:04:50.046 "r_mbytes_per_sec": 0, 00:04:50.046 "w_mbytes_per_sec": 0 00:04:50.046 }, 00:04:50.046 "claimed": false, 00:04:50.046 "zoned": false, 00:04:50.046 "supported_io_types": { 00:04:50.046 "read": true, 00:04:50.046 "write": true, 00:04:50.046 "unmap": true, 00:04:50.046 "flush": true, 00:04:50.046 "reset": true, 00:04:50.046 "nvme_admin": false, 00:04:50.046 "nvme_io": false, 00:04:50.046 
"nvme_io_md": false, 00:04:50.046 "write_zeroes": true, 00:04:50.046 "zcopy": true, 00:04:50.046 "get_zone_info": false, 00:04:50.046 "zone_management": false, 00:04:50.046 "zone_append": false, 00:04:50.046 "compare": false, 00:04:50.046 "compare_and_write": false, 00:04:50.046 "abort": true, 00:04:50.046 "seek_hole": false, 00:04:50.046 "seek_data": false, 00:04:50.046 "copy": true, 00:04:50.046 "nvme_iov_md": false 00:04:50.046 }, 00:04:50.046 "memory_domains": [ 00:04:50.046 { 00:04:50.046 "dma_device_id": "system", 00:04:50.046 "dma_device_type": 1 00:04:50.046 }, 00:04:50.046 { 00:04:50.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.046 "dma_device_type": 2 00:04:50.046 } 00:04:50.046 ], 00:04:50.046 "driver_specific": { 00:04:50.046 "passthru": { 00:04:50.046 "name": "Passthru0", 00:04:50.046 "base_bdev_name": "Malloc2" 00:04:50.046 } 00:04:50.046 } 00:04:50.046 } 00:04:50.046 ]' 00:04:50.046 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.322 00:04:50.322 real 0m0.323s 00:04:50.322 user 0m0.221s 00:04:50.322 sys 0m0.034s 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.322 10:26:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.322 ************************************ 00:04:50.322 END TEST rpc_daemon_integrity 00:04:50.322 ************************************ 00:04:50.322 10:26:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.322 10:26:38 rpc -- rpc/rpc.sh@84 -- # killprocess 56717 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@952 -- # '[' -z 56717 ']' 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@956 -- # kill -0 56717 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@957 -- # uname 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56717 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.322 killing process with pid 56717 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56717' 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@971 -- # kill 56717 00:04:50.322 10:26:38 rpc -- common/autotest_common.sh@976 -- # wait 56717 00:04:50.580 00:04:50.580 real 0m2.156s 00:04:50.580 user 0m2.939s 00:04:50.580 sys 0m0.536s 00:04:50.580 10:26:39 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.580 10:26:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.580 ************************************ 00:04:50.581 END TEST rpc 00:04:50.581 ************************************ 00:04:50.581 10:26:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.581 10:26:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.581 10:26:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.581 10:26:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.581 ************************************ 00:04:50.581 START TEST skip_rpc 00:04:50.581 ************************************ 00:04:50.581 10:26:39 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.581 * Looking for test storage... 00:04:50.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.839 10:26:39 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.839 10:26:39 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.839 10:26:39 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.839 10:26:39 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.839 10:26:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.840 10:26:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.840 --rc genhtml_branch_coverage=1 00:04:50.840 --rc genhtml_function_coverage=1 00:04:50.840 --rc genhtml_legend=1 00:04:50.840 --rc geninfo_all_blocks=1 00:04:50.840 --rc geninfo_unexecuted_blocks=1 00:04:50.840 00:04:50.840 ' 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.840 --rc genhtml_branch_coverage=1 00:04:50.840 --rc genhtml_function_coverage=1 00:04:50.840 --rc genhtml_legend=1 00:04:50.840 --rc geninfo_all_blocks=1 00:04:50.840 --rc geninfo_unexecuted_blocks=1 00:04:50.840 00:04:50.840 ' 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.840 --rc genhtml_branch_coverage=1 00:04:50.840 --rc genhtml_function_coverage=1 00:04:50.840 --rc genhtml_legend=1 00:04:50.840 --rc geninfo_all_blocks=1 00:04:50.840 --rc geninfo_unexecuted_blocks=1 00:04:50.840 00:04:50.840 ' 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.840 --rc genhtml_branch_coverage=1 00:04:50.840 --rc genhtml_function_coverage=1 00:04:50.840 --rc genhtml_legend=1 00:04:50.840 --rc geninfo_all_blocks=1 00:04:50.840 --rc geninfo_unexecuted_blocks=1 00:04:50.840 00:04:50.840 ' 00:04:50.840 10:26:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.840 10:26:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.840 10:26:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.840 10:26:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.840 ************************************ 00:04:50.840 START TEST skip_rpc 00:04:50.840 ************************************ 00:04:50.840 10:26:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:50.840 10:26:39 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56910 00:04:50.840 10:26:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:50.840 10:26:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.840 10:26:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:50.840 [2024-11-12 10:26:39.491661] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:04:50.840 [2024-11-12 10:26:39.491764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56910 ] 00:04:51.097 [2024-11-12 10:26:39.633359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.097 [2024-11-12 10:26:39.663553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.097 [2024-11-12 10:26:39.700546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56910 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56910 ']' 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56910 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56910 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.366 killing process with pid 56910 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56910' 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56910 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56910 00:04:56.366 00:04:56.366 real 0m5.269s 00:04:56.366 user 0m5.020s 00:04:56.366 sys 0m0.166s 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.366 ************************************ 00:04:56.366 END TEST skip_rpc 00:04:56.366 ************************************ 00:04:56.366 10:26:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.366 10:26:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.366 10:26:44 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.366 10:26:44 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.366 10:26:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.366 ************************************ 00:04:56.366 START TEST skip_rpc_with_json 00:04:56.366 ************************************ 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56996 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56996 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56996 ']' 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.366 10:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.366 [2024-11-12 10:26:44.819769] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
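The skip_rpc_with_json test starting here exercises the config round trip shown in the entries below: the live target's state is exported with save_config into test/rpc/config.json, and a second target is later booted from that file with --json and --no-rpc-server. A minimal sketch of that round trip, assuming scripts/rpc.py and the default RPC socket; the nvmf_create_transport call mirrors the one the test issues before saving:

```bash
# Export the running target's configuration and reuse it to boot a fresh target.
rpc=./scripts/rpc.py
cfg=test/rpc/config.json

$rpc nvmf_create_transport -t tcp      # give save_config something non-default to record
$rpc save_config > "$cfg"              # dump every subsystem's current config as JSON

# Boot a new target straight from the file; --no-rpc-server matches the test,
# so this instance is configured purely from the JSON and never opens an RPC socket.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg"
```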
00:04:56.366 [2024-11-12 10:26:44.819865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56996 ] 00:04:56.366 [2024-11-12 10:26:44.965659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.366 [2024-11-12 10:26:44.997669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.366 [2024-11-12 10:26:45.037113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.626 [2024-11-12 10:26:45.159891] nvmf_rpc.c:2850:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.626 request: 00:04:56.626 { 00:04:56.626 "trtype": "tcp", 00:04:56.626 "method": "nvmf_get_transports", 00:04:56.626 "req_id": 1 00:04:56.626 } 00:04:56.626 Got JSON-RPC error response 00:04:56.626 response: 00:04:56.626 { 00:04:56.626 "code": -19, 00:04:56.626 "message": "No such device" 00:04:56.626 } 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.626 [2024-11-12 10:26:45.171992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.626 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.626 { 00:04:56.626 "subsystems": [ 00:04:56.626 { 00:04:56.626 "subsystem": "fsdev", 00:04:56.626 "config": [ 00:04:56.626 { 00:04:56.626 "method": "fsdev_set_opts", 00:04:56.626 "params": { 00:04:56.626 "fsdev_io_pool_size": 65535, 00:04:56.626 "fsdev_io_cache_size": 256 00:04:56.626 } 00:04:56.626 } 00:04:56.626 ] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "keyring", 00:04:56.626 "config": [] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "iobuf", 00:04:56.626 "config": [ 00:04:56.626 { 00:04:56.626 "method": "iobuf_set_options", 00:04:56.626 "params": { 00:04:56.626 "small_pool_count": 8192, 00:04:56.626 "large_pool_count": 1024, 00:04:56.626 "small_bufsize": 8192, 00:04:56.626 "large_bufsize": 135168, 00:04:56.626 "enable_numa": false 00:04:56.626 } 
00:04:56.626 } 00:04:56.626 ] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "sock", 00:04:56.626 "config": [ 00:04:56.626 { 00:04:56.626 "method": "sock_set_default_impl", 00:04:56.626 "params": { 00:04:56.626 "impl_name": "uring" 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "sock_impl_set_options", 00:04:56.626 "params": { 00:04:56.626 "impl_name": "ssl", 00:04:56.626 "recv_buf_size": 4096, 00:04:56.626 "send_buf_size": 4096, 00:04:56.626 "enable_recv_pipe": true, 00:04:56.626 "enable_quickack": false, 00:04:56.626 "enable_placement_id": 0, 00:04:56.626 "enable_zerocopy_send_server": true, 00:04:56.626 "enable_zerocopy_send_client": false, 00:04:56.626 "zerocopy_threshold": 0, 00:04:56.626 "tls_version": 0, 00:04:56.626 "enable_ktls": false 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "sock_impl_set_options", 00:04:56.626 "params": { 00:04:56.626 "impl_name": "posix", 00:04:56.626 "recv_buf_size": 2097152, 00:04:56.626 "send_buf_size": 2097152, 00:04:56.626 "enable_recv_pipe": true, 00:04:56.626 "enable_quickack": false, 00:04:56.626 "enable_placement_id": 0, 00:04:56.626 "enable_zerocopy_send_server": true, 00:04:56.626 "enable_zerocopy_send_client": false, 00:04:56.626 "zerocopy_threshold": 0, 00:04:56.626 "tls_version": 0, 00:04:56.626 "enable_ktls": false 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "sock_impl_set_options", 00:04:56.626 "params": { 00:04:56.626 "impl_name": "uring", 00:04:56.626 "recv_buf_size": 2097152, 00:04:56.626 "send_buf_size": 2097152, 00:04:56.626 "enable_recv_pipe": true, 00:04:56.626 "enable_quickack": false, 00:04:56.626 "enable_placement_id": 0, 00:04:56.626 "enable_zerocopy_send_server": false, 00:04:56.626 "enable_zerocopy_send_client": false, 00:04:56.626 "zerocopy_threshold": 0, 00:04:56.626 "tls_version": 0, 00:04:56.626 "enable_ktls": false 00:04:56.626 } 00:04:56.626 } 00:04:56.626 ] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "vmd", 00:04:56.626 "config": [] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "accel", 00:04:56.626 "config": [ 00:04:56.626 { 00:04:56.626 "method": "accel_set_options", 00:04:56.626 "params": { 00:04:56.626 "small_cache_size": 128, 00:04:56.626 "large_cache_size": 16, 00:04:56.626 "task_count": 2048, 00:04:56.626 "sequence_count": 2048, 00:04:56.626 "buf_count": 2048 00:04:56.626 } 00:04:56.626 } 00:04:56.626 ] 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "subsystem": "bdev", 00:04:56.626 "config": [ 00:04:56.626 { 00:04:56.626 "method": "bdev_set_options", 00:04:56.626 "params": { 00:04:56.626 "bdev_io_pool_size": 65535, 00:04:56.626 "bdev_io_cache_size": 256, 00:04:56.626 "bdev_auto_examine": true, 00:04:56.626 "iobuf_small_cache_size": 128, 00:04:56.626 "iobuf_large_cache_size": 16 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "bdev_raid_set_options", 00:04:56.626 "params": { 00:04:56.626 "process_window_size_kb": 1024, 00:04:56.626 "process_max_bandwidth_mb_sec": 0 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "bdev_iscsi_set_options", 00:04:56.626 "params": { 00:04:56.626 "timeout_sec": 30 00:04:56.626 } 00:04:56.626 }, 00:04:56.626 { 00:04:56.626 "method": "bdev_nvme_set_options", 00:04:56.626 "params": { 00:04:56.626 "action_on_timeout": "none", 00:04:56.626 "timeout_us": 0, 00:04:56.626 "timeout_admin_us": 0, 00:04:56.626 "keep_alive_timeout_ms": 10000, 00:04:56.626 "arbitration_burst": 0, 00:04:56.626 "low_priority_weight": 0, 00:04:56.626 "medium_priority_weight": 
0, 00:04:56.626 "high_priority_weight": 0, 00:04:56.626 "nvme_adminq_poll_period_us": 10000, 00:04:56.626 "nvme_ioq_poll_period_us": 0, 00:04:56.626 "io_queue_requests": 0, 00:04:56.626 "delay_cmd_submit": true, 00:04:56.626 "transport_retry_count": 4, 00:04:56.626 "bdev_retry_count": 3, 00:04:56.626 "transport_ack_timeout": 0, 00:04:56.626 "ctrlr_loss_timeout_sec": 0, 00:04:56.626 "reconnect_delay_sec": 0, 00:04:56.626 "fast_io_fail_timeout_sec": 0, 00:04:56.626 "disable_auto_failback": false, 00:04:56.626 "generate_uuids": false, 00:04:56.626 "transport_tos": 0, 00:04:56.626 "nvme_error_stat": false, 00:04:56.626 "rdma_srq_size": 0, 00:04:56.626 "io_path_stat": false, 00:04:56.626 "allow_accel_sequence": false, 00:04:56.626 "rdma_max_cq_size": 0, 00:04:56.626 "rdma_cm_event_timeout_ms": 0, 00:04:56.626 "dhchap_digests": [ 00:04:56.626 "sha256", 00:04:56.626 "sha384", 00:04:56.626 "sha512" 00:04:56.626 ], 00:04:56.626 "dhchap_dhgroups": [ 00:04:56.626 "null", 00:04:56.626 "ffdhe2048", 00:04:56.626 "ffdhe3072", 00:04:56.626 "ffdhe4096", 00:04:56.626 "ffdhe6144", 00:04:56.626 "ffdhe8192" 00:04:56.627 ] 00:04:56.627 } 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "method": "bdev_nvme_set_hotplug", 00:04:56.627 "params": { 00:04:56.627 "period_us": 100000, 00:04:56.627 "enable": false 00:04:56.627 } 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "method": "bdev_wait_for_examine" 00:04:56.627 } 00:04:56.627 ] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "scsi", 00:04:56.627 "config": null 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "scheduler", 00:04:56.627 "config": [ 00:04:56.627 { 00:04:56.627 "method": "framework_set_scheduler", 00:04:56.627 "params": { 00:04:56.627 "name": "static" 00:04:56.627 } 00:04:56.627 } 00:04:56.627 ] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "vhost_scsi", 00:04:56.627 "config": [] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "vhost_blk", 00:04:56.627 "config": [] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "ublk", 00:04:56.627 "config": [] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "nbd", 00:04:56.627 "config": [] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "nvmf", 00:04:56.627 "config": [ 00:04:56.627 { 00:04:56.627 "method": "nvmf_set_config", 00:04:56.627 "params": { 00:04:56.627 "discovery_filter": "match_any", 00:04:56.627 "admin_cmd_passthru": { 00:04:56.627 "identify_ctrlr": false 00:04:56.627 }, 00:04:56.627 "dhchap_digests": [ 00:04:56.627 "sha256", 00:04:56.627 "sha384", 00:04:56.627 "sha512" 00:04:56.627 ], 00:04:56.627 "dhchap_dhgroups": [ 00:04:56.627 "null", 00:04:56.627 "ffdhe2048", 00:04:56.627 "ffdhe3072", 00:04:56.627 "ffdhe4096", 00:04:56.627 "ffdhe6144", 00:04:56.627 "ffdhe8192" 00:04:56.627 ] 00:04:56.627 } 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "method": "nvmf_set_max_subsystems", 00:04:56.627 "params": { 00:04:56.627 "max_subsystems": 1024 00:04:56.627 } 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "method": "nvmf_set_crdt", 00:04:56.627 "params": { 00:04:56.627 "crdt1": 0, 00:04:56.627 "crdt2": 0, 00:04:56.627 "crdt3": 0 00:04:56.627 } 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "method": "nvmf_create_transport", 00:04:56.627 "params": { 00:04:56.627 "trtype": "TCP", 00:04:56.627 "max_queue_depth": 128, 00:04:56.627 "max_io_qpairs_per_ctrlr": 127, 00:04:56.627 "in_capsule_data_size": 4096, 00:04:56.627 "max_io_size": 131072, 00:04:56.627 "io_unit_size": 131072, 00:04:56.627 "max_aq_depth": 128, 00:04:56.627 "num_shared_buffers": 511, 00:04:56.627 
"buf_cache_size": 4294967295, 00:04:56.627 "dif_insert_or_strip": false, 00:04:56.627 "zcopy": false, 00:04:56.627 "c2h_success": true, 00:04:56.627 "sock_priority": 0, 00:04:56.627 "abort_timeout_sec": 1, 00:04:56.627 "ack_timeout": 0, 00:04:56.627 "data_wr_pool_size": 0 00:04:56.627 } 00:04:56.627 } 00:04:56.627 ] 00:04:56.627 }, 00:04:56.627 { 00:04:56.627 "subsystem": "iscsi", 00:04:56.627 "config": [ 00:04:56.627 { 00:04:56.627 "method": "iscsi_set_options", 00:04:56.627 "params": { 00:04:56.627 "node_base": "iqn.2016-06.io.spdk", 00:04:56.627 "max_sessions": 128, 00:04:56.627 "max_connections_per_session": 2, 00:04:56.627 "max_queue_depth": 64, 00:04:56.627 "default_time2wait": 2, 00:04:56.627 "default_time2retain": 20, 00:04:56.627 "first_burst_length": 8192, 00:04:56.627 "immediate_data": true, 00:04:56.627 "allow_duplicated_isid": false, 00:04:56.627 "error_recovery_level": 0, 00:04:56.627 "nop_timeout": 60, 00:04:56.627 "nop_in_interval": 30, 00:04:56.627 "disable_chap": false, 00:04:56.627 "require_chap": false, 00:04:56.627 "mutual_chap": false, 00:04:56.627 "chap_group": 0, 00:04:56.627 "max_large_datain_per_connection": 64, 00:04:56.627 "max_r2t_per_connection": 4, 00:04:56.627 "pdu_pool_size": 36864, 00:04:56.627 "immediate_data_pool_size": 16384, 00:04:56.627 "data_out_pool_size": 2048 00:04:56.627 } 00:04:56.627 } 00:04:56.627 ] 00:04:56.627 } 00:04:56.627 ] 00:04:56.627 } 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56996 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56996 ']' 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56996 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.627 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56996 00:04:56.885 killing process with pid 56996 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56996' 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56996 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56996 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57011 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.885 10:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57011 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57011 ']' 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57011 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:02.161 10:26:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57011 00:05:02.161 killing process with pid 57011 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57011' 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57011 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57011 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.161 00:05:02.161 real 0m6.139s 00:05:02.161 user 0m5.915s 00:05:02.161 sys 0m0.402s 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.161 10:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.161 ************************************ 00:05:02.161 END TEST skip_rpc_with_json 00:05:02.161 ************************************ 00:05:02.421 10:26:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.421 10:26:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.421 10:26:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.421 10:26:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 ************************************ 00:05:02.421 START TEST skip_rpc_with_delay 00:05:02.421 ************************************ 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.421 10:26:50 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.421 10:26:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.421 [2024-11-12 10:26:50.997195] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.421 00:05:02.421 real 0m0.069s 00:05:02.421 user 0m0.040s 00:05:02.421 sys 0m0.029s 00:05:02.421 ************************************ 00:05:02.421 END TEST skip_rpc_with_delay 00:05:02.421 ************************************ 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.421 10:26:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 10:26:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:02.421 10:26:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:02.421 10:26:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:02.421 10:26:51 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.421 10:26:51 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.421 10:26:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 ************************************ 00:05:02.421 START TEST exit_on_failed_rpc_init 00:05:02.421 ************************************ 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57120 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57120 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57120 ']' 00:05:02.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.421 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.421 [2024-11-12 10:26:51.130040] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
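The failure traced above is the whole point of skip_rpc_with_delay: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC listener, because --wait-for-rpc holds back subsystem initialization until the framework_start_init RPC arrives, which can never happen without an RPC server. A minimal sketch of that assertion, assuming the spdk_tgt path used in this run:

    # Expect a non-zero exit and the "Cannot use '--wait-for-rpc' ..." error seen above.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi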
00:05:02.421 [2024-11-12 10:26:51.130368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57120 ] 00:05:02.681 [2024-11-12 10:26:51.276913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.681 [2024-11-12 10:26:51.305336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.681 [2024-11-12 10:26:51.343068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.940 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.940 [2024-11-12 10:26:51.518592] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:02.940 [2024-11-12 10:26:51.518837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57131 ] 00:05:02.940 [2024-11-12 10:26:51.658881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.940 [2024-11-12 10:26:51.689351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.940 [2024-11-12 10:26:51.689647] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:02.940 [2024-11-12 10:26:51.689775] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:02.940 [2024-11-12 10:26:51.689872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57120 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57120 ']' 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57120 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57120 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57120' 00:05:03.199 killing process with pid 57120 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57120 00:05:03.199 10:26:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57120 00:05:03.459 00:05:03.459 real 0m0.936s 00:05:03.459 user 0m1.083s 00:05:03.459 sys 0m0.248s 00:05:03.459 10:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.459 ************************************ 00:05:03.459 END TEST exit_on_failed_rpc_init 00:05:03.459 ************************************ 00:05:03.459 10:26:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.459 10:26:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.459 ************************************ 00:05:03.459 END TEST skip_rpc 00:05:03.459 ************************************ 00:05:03.459 00:05:03.459 real 0m12.779s 00:05:03.459 user 0m12.225s 00:05:03.459 sys 0m1.036s 00:05:03.459 10:26:52 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.459 10:26:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.459 10:26:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.459 10:26:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.459 10:26:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.459 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.459 
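exit_on_failed_rpc_init, which finishes above, checks the complementary case: a first spdk_tgt owns the default RPC socket, so a second instance logs "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." and exits non-zero, and that failure of the second target is what the test counts as success. A rough reconstruction under the same paths (the real script uses waitforlisten rather than a fixed sleep):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 1        # crude stand-in for waitforlisten
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second target started despite the RPC socket being in use" >&2
    fi             # expected: rpc.c reports the socket in use and spdk_app_start fails
    kill -SIGINT "$first_pid"
    wait "$first_pid"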
************************************ 00:05:03.459 START TEST rpc_client 00:05:03.459 ************************************ 00:05:03.459 10:26:52 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.459 * Looking for test storage... 00:05:03.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:03.459 10:26:52 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.459 10:26:52 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.459 10:26:52 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.718 10:26:52 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.718 10:26:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.718 10:26:52 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.718 10:26:52 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.718 --rc genhtml_branch_coverage=1 00:05:03.718 --rc genhtml_function_coverage=1 00:05:03.718 --rc genhtml_legend=1 00:05:03.718 --rc geninfo_all_blocks=1 00:05:03.718 --rc geninfo_unexecuted_blocks=1 00:05:03.718 00:05:03.718 ' 00:05:03.718 10:26:52 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.718 --rc genhtml_branch_coverage=1 00:05:03.718 --rc genhtml_function_coverage=1 00:05:03.718 --rc genhtml_legend=1 00:05:03.718 --rc geninfo_all_blocks=1 00:05:03.718 --rc geninfo_unexecuted_blocks=1 00:05:03.719 00:05:03.719 ' 00:05:03.719 10:26:52 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.719 --rc genhtml_branch_coverage=1 00:05:03.719 --rc genhtml_function_coverage=1 00:05:03.719 --rc genhtml_legend=1 00:05:03.719 --rc geninfo_all_blocks=1 00:05:03.719 --rc geninfo_unexecuted_blocks=1 00:05:03.719 00:05:03.719 ' 00:05:03.719 10:26:52 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.719 --rc genhtml_branch_coverage=1 00:05:03.719 --rc genhtml_function_coverage=1 00:05:03.719 --rc genhtml_legend=1 00:05:03.719 --rc geninfo_all_blocks=1 00:05:03.719 --rc geninfo_unexecuted_blocks=1 00:05:03.719 00:05:03.719 ' 00:05:03.719 10:26:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.719 OK 00:05:03.719 10:26:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.719 00:05:03.719 real 0m0.209s 00:05:03.719 user 0m0.131s 00:05:03.719 sys 0m0.085s 00:05:03.719 ************************************ 00:05:03.719 END TEST rpc_client 00:05:03.719 ************************************ 00:05:03.719 10:26:52 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.719 10:26:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.719 10:26:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.719 10:26:52 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.719 10:26:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.719 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.719 ************************************ 00:05:03.719 START TEST json_config 00:05:03.719 ************************************ 00:05:03.719 10:26:52 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.719 10:26:52 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.719 10:26:52 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.719 10:26:52 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.978 10:26:52 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.978 10:26:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.978 10:26:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.978 10:26:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.978 10:26:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.978 10:26:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.978 10:26:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.978 10:26:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.978 10:26:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.978 10:26:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.979 10:26:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.979 10:26:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.979 10:26:52 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.979 10:26:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.979 10:26:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.979 10:26:52 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.979 10:26:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.979 10:26:52 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.979 10:26:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.979 10:26:52 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.979 10:26:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.979 10:26:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.979 10:26:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.979 10:26:52 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.979 --rc genhtml_branch_coverage=1 00:05:03.979 --rc genhtml_function_coverage=1 00:05:03.979 --rc genhtml_legend=1 00:05:03.979 --rc geninfo_all_blocks=1 00:05:03.979 --rc geninfo_unexecuted_blocks=1 00:05:03.979 00:05:03.979 ' 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.979 --rc genhtml_branch_coverage=1 00:05:03.979 --rc genhtml_function_coverage=1 00:05:03.979 --rc genhtml_legend=1 00:05:03.979 --rc geninfo_all_blocks=1 00:05:03.979 --rc geninfo_unexecuted_blocks=1 00:05:03.979 00:05:03.979 ' 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.979 --rc genhtml_branch_coverage=1 00:05:03.979 --rc genhtml_function_coverage=1 00:05:03.979 --rc genhtml_legend=1 00:05:03.979 --rc geninfo_all_blocks=1 00:05:03.979 --rc geninfo_unexecuted_blocks=1 00:05:03.979 00:05:03.979 ' 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.979 --rc genhtml_branch_coverage=1 00:05:03.979 --rc genhtml_function_coverage=1 00:05:03.979 --rc genhtml_legend=1 00:05:03.979 --rc geninfo_all_blocks=1 00:05:03.979 --rc geninfo_unexecuted_blocks=1 00:05:03.979 00:05:03.979 ' 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.979 10:26:52 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.979 10:26:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.979 10:26:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.979 10:26:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.979 10:26:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.979 10:26:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.979 10:26:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.979 10:26:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.979 10:26:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.979 10:26:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.979 10:26:52 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.979 10:26:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:03.979 INFO: JSON configuration test init 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.979 10:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 Waiting for target to run... 00:05:03.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
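The json_config harness set up above keys every later step off a handful of bash associative arrays: one maps an app name (target or initiator) to its RPC socket, one to its spdk_tgt parameters, one to the JSON file it should produce, and one to its PID once started. A condensed sketch of that pattern, using the values visible in this run:

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A app_pid=()
    app=target
    # json_config_test_start_app then launches roughly (params left unquoted so they word-split):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --wait-for-rpc &
    app_pid[$app]=$!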
00:05:03.979 10:26:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:03.979 10:26:52 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.979 10:26:52 json_config -- json_config/common.sh@10 -- # shift 00:05:03.979 10:26:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.979 10:26:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.979 10:26:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.979 10:26:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.979 10:26:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.979 10:26:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57265 00:05:03.979 10:26:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.979 10:26:52 json_config -- json_config/common.sh@25 -- # waitforlisten 57265 /var/tmp/spdk_tgt.sock 00:05:03.979 10:26:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@833 -- # '[' -z 57265 ']' 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.980 10:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.980 [2024-11-12 10:26:52.628848] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:03.980 [2024-11-12 10:26:52.629470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57265 ] 00:05:04.238 [2024-11-12 10:26:52.941945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.238 [2024-11-12 10:26:52.968062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:05.175 00:05:05.175 10:26:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.175 10:26:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.175 10:26:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:05.175 10:26:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.434 [2024-11-12 10:26:53.960474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.434 10:26:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.434 10:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:05.434 10:26:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:05.434 10:26:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.693 10:26:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.693 10:26:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.693 10:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.951 10:26:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.951 10:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:05.951 10:26:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:05.952 10:26:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.952 10:26:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.952 MallocForNvmf0 00:05:05.952 10:26:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.952 10:26:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.224 MallocForNvmf1 00:05:06.568 10:26:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.568 10:26:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.568 [2024-11-12 10:26:55.191242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.568 10:26:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.568 10:26:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.861 10:26:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.861 10:26:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.120 10:26:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.120 10:26:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.379 10:26:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.379 10:26:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.638 [2024-11-12 10:26:56.215729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.638 10:26:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:07.638 10:26:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.638 10:26:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.638 10:26:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:07.638 10:26:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.638 10:26:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.638 10:26:56 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:07.638 10:26:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.638 10:26:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.897 MallocBdevForConfigChangeCheck 00:05:07.897 10:26:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:07.897 10:26:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.897 10:26:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.897 10:26:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:07.897 10:26:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.464 INFO: shutting down applications... 00:05:08.464 10:26:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
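Everything the target was just configured with can be reduced to the rpc.py sequence traced above; condensed, with the socket path and arguments exactly as they appear in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    "$rpc" -s "$sock" bdev_malloc_create 8 512 --name MallocForNvmf0
    "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -u 8192 -c 0
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # plus bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck, created only so a
    # later configuration change can be provoked and detected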
00:05:08.464 10:26:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:08.464 10:26:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:08.464 10:26:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:08.464 10:26:57 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.722 Calling clear_iscsi_subsystem 00:05:08.722 Calling clear_nvmf_subsystem 00:05:08.722 Calling clear_nbd_subsystem 00:05:08.722 Calling clear_ublk_subsystem 00:05:08.722 Calling clear_vhost_blk_subsystem 00:05:08.722 Calling clear_vhost_scsi_subsystem 00:05:08.722 Calling clear_bdev_subsystem 00:05:08.722 10:26:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:08.722 10:26:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:08.722 10:26:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:08.722 10:26:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.723 10:26:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.723 10:26:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.982 10:26:57 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.982 10:26:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.982 10:26:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.982 10:26:57 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.982 10:26:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.982 10:26:57 json_config -- json_config/common.sh@35 -- # [[ -n 57265 ]] 00:05:08.982 10:26:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57265 00:05:08.982 10:26:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.982 10:26:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.982 10:26:57 json_config -- json_config/common.sh@41 -- # kill -0 57265 00:05:08.982 10:26:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.550 10:26:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.550 10:26:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.550 10:26:58 json_config -- json_config/common.sh@41 -- # kill -0 57265 00:05:09.550 10:26:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.550 10:26:58 json_config -- json_config/common.sh@43 -- # break 00:05:09.550 SPDK target shutdown done 00:05:09.550 10:26:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.550 10:26:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.550 INFO: relaunching applications... 00:05:09.550 10:26:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
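The shutdown just traced is a plain SIGINT followed by a bounded poll: json_config/common.sh gives the target up to thirty half-second checks to exit. Stripped of the xtrace noise, the loop is roughly:

    kill -SIGINT "$pid"                       # pid 57265 in this run
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown finished
        sleep 0.5
    done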
00:05:09.550 10:26:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.550 10:26:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.550 10:26:58 json_config -- json_config/common.sh@10 -- # shift 00:05:09.550 10:26:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.550 10:26:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.550 10:26:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.550 10:26:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.550 10:26:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.550 10:26:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57460 00:05:09.550 Waiting for target to run... 00:05:09.550 10:26:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.550 10:26:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.550 10:26:58 json_config -- json_config/common.sh@25 -- # waitforlisten 57460 /var/tmp/spdk_tgt.sock 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@833 -- # '[' -z 57460 ']' 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.550 10:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.550 [2024-11-12 10:26:58.263311] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:09.550 [2024-11-12 10:26:58.263844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57460 ] 00:05:09.810 [2024-11-12 10:26:58.551986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.069 [2024-11-12 10:26:58.577627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.069 [2024-11-12 10:26:58.707520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.328 [2024-11-12 10:26:58.901953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.328 [2024-11-12 10:26:58.934021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.586 10:26:59 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.586 10:26:59 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:10.586 00:05:10.587 10:26:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.587 10:26:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:10.587 INFO: Checking if target configuration is the same... 
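The comparison announced here (and traced in detail below) does not diff raw files: both the freshly saved RPC output and the on-disk spdk_tgt_config.json are first normalized by config_filter.py -method sort and only then diffed, so ordering differences cannot cause false mismatches. Approximately, with illustrative temp-file names in place of the mktemp-generated ones the script really uses:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
    # After bdev_malloc_delete MallocBdevForConfigChangeCheck the same diff must fail,
    # which is the "configuration change detected" half of the test.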
00:05:10.587 10:26:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.587 10:26:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:10.587 10:26:59 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.587 10:26:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.587 + '[' 2 -ne 2 ']' 00:05:10.587 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.587 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:10.587 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.587 +++ basename /dev/fd/62 00:05:10.587 ++ mktemp /tmp/62.XXX 00:05:10.587 + tmp_file_1=/tmp/62.pwx 00:05:10.587 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.587 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.587 + tmp_file_2=/tmp/spdk_tgt_config.json.AGH 00:05:10.587 + ret=0 00:05:10.587 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.154 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.154 + diff -u /tmp/62.pwx /tmp/spdk_tgt_config.json.AGH 00:05:11.154 INFO: JSON config files are the same 00:05:11.154 + echo 'INFO: JSON config files are the same' 00:05:11.154 + rm /tmp/62.pwx /tmp/spdk_tgt_config.json.AGH 00:05:11.154 + exit 0 00:05:11.154 10:26:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.154 INFO: changing configuration and checking if this can be detected... 00:05:11.154 10:26:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.154 10:26:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.154 10:26:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.413 10:27:00 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.413 10:27:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.413 10:27:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.413 + '[' 2 -ne 2 ']' 00:05:11.413 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.413 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:11.413 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.413 +++ basename /dev/fd/62 00:05:11.413 ++ mktemp /tmp/62.XXX 00:05:11.413 + tmp_file_1=/tmp/62.NEp 00:05:11.413 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.413 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.413 + tmp_file_2=/tmp/spdk_tgt_config.json.XOw 00:05:11.413 + ret=0 00:05:11.413 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.982 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.982 + diff -u /tmp/62.NEp /tmp/spdk_tgt_config.json.XOw 00:05:11.982 + ret=1 00:05:11.982 + echo '=== Start of file: /tmp/62.NEp ===' 00:05:11.982 + cat /tmp/62.NEp 00:05:11.982 + echo '=== End of file: /tmp/62.NEp ===' 00:05:11.982 + echo '' 00:05:11.982 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XOw ===' 00:05:11.982 + cat /tmp/spdk_tgt_config.json.XOw 00:05:11.982 + echo '=== End of file: /tmp/spdk_tgt_config.json.XOw ===' 00:05:11.982 + echo '' 00:05:11.982 + rm /tmp/62.NEp /tmp/spdk_tgt_config.json.XOw 00:05:11.982 + exit 1 00:05:11.982 INFO: configuration change detected. 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 57460 ]] 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.982 10:27:00 json_config -- json_config/json_config.sh@330 -- # killprocess 57460 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@952 -- # '[' -z 57460 ']' 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@956 -- # kill -0 57460 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@957 -- # uname 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57460 00:05:11.982 
killing process with pid 57460 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57460' 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@971 -- # kill 57460 00:05:11.982 10:27:00 json_config -- common/autotest_common.sh@976 -- # wait 57460 00:05:12.242 10:27:00 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.242 10:27:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:12.242 10:27:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.242 10:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.242 10:27:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:12.242 10:27:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:12.242 INFO: Success 00:05:12.242 ************************************ 00:05:12.242 END TEST json_config 00:05:12.242 ************************************ 00:05:12.242 00:05:12.242 real 0m8.466s 00:05:12.242 user 0m12.380s 00:05:12.242 sys 0m1.427s 00:05:12.242 10:27:00 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.242 10:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.242 10:27:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.242 10:27:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.242 10:27:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.242 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:05:12.242 ************************************ 00:05:12.242 START TEST json_config_extra_key 00:05:12.242 ************************************ 00:05:12.242 10:27:00 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.242 10:27:00 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.242 10:27:00 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.242 10:27:00 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.501 10:27:01 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.502 10:27:01 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:12.502 10:27:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.502 10:27:01 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.502 --rc genhtml_branch_coverage=1 00:05:12.502 --rc genhtml_function_coverage=1 00:05:12.502 --rc genhtml_legend=1 00:05:12.502 --rc geninfo_all_blocks=1 00:05:12.502 --rc geninfo_unexecuted_blocks=1 00:05:12.502 00:05:12.502 ' 00:05:12.502 10:27:01 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.502 --rc genhtml_branch_coverage=1 00:05:12.502 --rc genhtml_function_coverage=1 00:05:12.502 --rc genhtml_legend=1 00:05:12.502 --rc geninfo_all_blocks=1 00:05:12.502 --rc geninfo_unexecuted_blocks=1 00:05:12.502 00:05:12.502 ' 00:05:12.502 10:27:01 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.502 --rc genhtml_branch_coverage=1 00:05:12.502 --rc genhtml_function_coverage=1 00:05:12.502 --rc genhtml_legend=1 00:05:12.502 --rc geninfo_all_blocks=1 00:05:12.502 --rc geninfo_unexecuted_blocks=1 00:05:12.502 00:05:12.502 ' 00:05:12.502 10:27:01 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.502 --rc genhtml_branch_coverage=1 00:05:12.502 --rc genhtml_function_coverage=1 00:05:12.502 --rc genhtml_legend=1 00:05:12.502 --rc geninfo_all_blocks=1 00:05:12.502 --rc geninfo_unexecuted_blocks=1 00:05:12.502 00:05:12.502 ' 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.502 10:27:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.502 10:27:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.502 10:27:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.502 10:27:01 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.502 10:27:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:12.502 10:27:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.502 10:27:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.502 INFO: launching applications... 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
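The per-app bookkeeping traced above (app_pid, app_socket, app_params, configs_path) is a set of bash associative arrays keyed by app name, here only 'target'. A small self-contained sketch of the same pattern (the sleep stands in for the real spdk_tgt launch):

  declare -A app_pid app_socket app_params configs_path

  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

  sleep 300 &                    # stand-in for launching spdk_tgt with the params above
  app_pid[target]=$!             # later shutdown/kill steps look the PID up by app name

  echo "target pid ${app_pid[target]}, socket ${app_socket[target]}"
  kill "${app_pid[target]}"      # clean up the stand-in process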
00:05:12.502 10:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.502 10:27:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57609 00:05:12.503 10:27:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.503 Waiting for target to run... 00:05:12.503 10:27:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.503 10:27:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57609 /var/tmp/spdk_tgt.sock 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57609 ']' 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.503 10:27:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.503 [2024-11-12 10:27:01.169463] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:12.503 [2024-11-12 10:27:01.169770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57609 ] 00:05:12.762 [2024-11-12 10:27:01.484362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.762 [2024-11-12 10:27:01.515725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.021 [2024-11-12 10:27:01.544722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.589 00:05:13.589 INFO: shutting down applications... 00:05:13.590 10:27:02 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.590 10:27:02 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.590 10:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:13.590 10:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57609 ]] 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57609 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57609 00:05:13.590 10:27:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57609 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.158 10:27:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.158 SPDK target shutdown done 00:05:14.158 10:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.158 Success 00:05:14.158 00:05:14.158 real 0m1.876s 00:05:14.158 user 0m1.776s 00:05:14.158 sys 0m0.336s 00:05:14.158 ************************************ 00:05:14.158 END TEST json_config_extra_key 00:05:14.158 ************************************ 00:05:14.158 10:27:02 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.158 10:27:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.158 10:27:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.158 10:27:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.158 10:27:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.158 10:27:02 -- common/autotest_common.sh@10 -- # set +x 00:05:14.158 ************************************ 00:05:14.158 START TEST alias_rpc 00:05:14.158 ************************************ 00:05:14.158 10:27:02 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.159 * Looking for test storage... 
00:05:14.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:14.159 10:27:02 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.159 10:27:02 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.159 10:27:02 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.418 10:27:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.418 --rc genhtml_branch_coverage=1 00:05:14.418 --rc genhtml_function_coverage=1 00:05:14.418 --rc genhtml_legend=1 00:05:14.418 --rc geninfo_all_blocks=1 00:05:14.418 --rc geninfo_unexecuted_blocks=1 00:05:14.418 00:05:14.418 ' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.418 --rc genhtml_branch_coverage=1 00:05:14.418 --rc genhtml_function_coverage=1 00:05:14.418 --rc genhtml_legend=1 00:05:14.418 --rc geninfo_all_blocks=1 00:05:14.418 --rc geninfo_unexecuted_blocks=1 00:05:14.418 00:05:14.418 ' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.418 --rc genhtml_branch_coverage=1 00:05:14.418 --rc genhtml_function_coverage=1 00:05:14.418 --rc genhtml_legend=1 00:05:14.418 --rc geninfo_all_blocks=1 00:05:14.418 --rc geninfo_unexecuted_blocks=1 00:05:14.418 00:05:14.418 ' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.418 --rc genhtml_branch_coverage=1 00:05:14.418 --rc genhtml_function_coverage=1 00:05:14.418 --rc genhtml_legend=1 00:05:14.418 --rc geninfo_all_blocks=1 00:05:14.418 --rc geninfo_unexecuted_blocks=1 00:05:14.418 00:05:14.418 ' 00:05:14.418 10:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.418 10:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57687 00:05:14.418 10:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57687 00:05:14.418 10:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57687 ']' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.418 10:27:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.418 [2024-11-12 10:27:03.038032] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:14.418 [2024-11-12 10:27:03.038649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57687 ] 00:05:14.676 [2024-11-12 10:27:03.185269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.676 [2024-11-12 10:27:03.215211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.676 [2024-11-12 10:27:03.254776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.676 10:27:03 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:14.676 10:27:03 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:14.676 10:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.245 10:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57687 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57687 ']' 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57687 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57687 00:05:15.245 killing process with pid 57687 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57687' 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@971 -- # kill 57687 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@976 -- # wait 57687 00:05:15.245 00:05:15.245 real 0m1.168s 00:05:15.245 user 0m1.365s 00:05:15.245 sys 0m0.317s 00:05:15.245 ************************************ 00:05:15.245 END TEST alias_rpc 00:05:15.245 ************************************ 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.245 10:27:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.245 10:27:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:15.245 10:27:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.245 10:27:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.245 10:27:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.245 10:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:15.504 ************************************ 00:05:15.504 START TEST spdkcli_tcp 00:05:15.504 ************************************ 00:05:15.504 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.504 * Looking for test storage... 
00:05:15.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.504 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.504 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.504 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.504 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.504 10:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.505 --rc genhtml_branch_coverage=1 00:05:15.505 --rc genhtml_function_coverage=1 00:05:15.505 --rc genhtml_legend=1 00:05:15.505 --rc geninfo_all_blocks=1 00:05:15.505 --rc geninfo_unexecuted_blocks=1 00:05:15.505 00:05:15.505 ' 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.505 --rc genhtml_branch_coverage=1 00:05:15.505 --rc genhtml_function_coverage=1 00:05:15.505 --rc genhtml_legend=1 00:05:15.505 --rc geninfo_all_blocks=1 00:05:15.505 --rc geninfo_unexecuted_blocks=1 00:05:15.505 
00:05:15.505 ' 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.505 --rc genhtml_branch_coverage=1 00:05:15.505 --rc genhtml_function_coverage=1 00:05:15.505 --rc genhtml_legend=1 00:05:15.505 --rc geninfo_all_blocks=1 00:05:15.505 --rc geninfo_unexecuted_blocks=1 00:05:15.505 00:05:15.505 ' 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.505 --rc genhtml_branch_coverage=1 00:05:15.505 --rc genhtml_function_coverage=1 00:05:15.505 --rc genhtml_legend=1 00:05:15.505 --rc geninfo_all_blocks=1 00:05:15.505 --rc geninfo_unexecuted_blocks=1 00:05:15.505 00:05:15.505 ' 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57758 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.505 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57758 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57758 ']' 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.505 10:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.505 [2024-11-12 10:27:04.262119] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:15.505 [2024-11-12 10:27:04.262440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57758 ] 00:05:15.764 [2024-11-12 10:27:04.407924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.764 [2024-11-12 10:27:04.441937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.764 [2024-11-12 10:27:04.441945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.764 [2024-11-12 10:27:04.479181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.023 10:27:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.023 10:27:04 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:16.023 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57767 00:05:16.023 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.023 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.283 [ 00:05:16.283 "bdev_malloc_delete", 00:05:16.283 "bdev_malloc_create", 00:05:16.283 "bdev_null_resize", 00:05:16.283 "bdev_null_delete", 00:05:16.283 "bdev_null_create", 00:05:16.283 "bdev_nvme_cuse_unregister", 00:05:16.283 "bdev_nvme_cuse_register", 00:05:16.283 "bdev_opal_new_user", 00:05:16.283 "bdev_opal_set_lock_state", 00:05:16.283 "bdev_opal_delete", 00:05:16.283 "bdev_opal_get_info", 00:05:16.283 "bdev_opal_create", 00:05:16.283 "bdev_nvme_opal_revert", 00:05:16.283 "bdev_nvme_opal_init", 00:05:16.283 "bdev_nvme_send_cmd", 00:05:16.283 "bdev_nvme_set_keys", 00:05:16.283 "bdev_nvme_get_path_iostat", 00:05:16.283 "bdev_nvme_get_mdns_discovery_info", 00:05:16.283 "bdev_nvme_stop_mdns_discovery", 00:05:16.283 "bdev_nvme_start_mdns_discovery", 00:05:16.283 "bdev_nvme_set_multipath_policy", 00:05:16.283 "bdev_nvme_set_preferred_path", 00:05:16.283 "bdev_nvme_get_io_paths", 00:05:16.283 "bdev_nvme_remove_error_injection", 00:05:16.283 "bdev_nvme_add_error_injection", 00:05:16.283 "bdev_nvme_get_discovery_info", 00:05:16.283 "bdev_nvme_stop_discovery", 00:05:16.283 "bdev_nvme_start_discovery", 00:05:16.283 "bdev_nvme_get_controller_health_info", 00:05:16.283 "bdev_nvme_disable_controller", 00:05:16.283 "bdev_nvme_enable_controller", 00:05:16.283 "bdev_nvme_reset_controller", 00:05:16.283 "bdev_nvme_get_transport_statistics", 00:05:16.283 "bdev_nvme_apply_firmware", 00:05:16.284 "bdev_nvme_detach_controller", 00:05:16.284 "bdev_nvme_get_controllers", 00:05:16.284 "bdev_nvme_attach_controller", 00:05:16.284 "bdev_nvme_set_hotplug", 00:05:16.284 "bdev_nvme_set_options", 00:05:16.284 "bdev_passthru_delete", 00:05:16.284 "bdev_passthru_create", 00:05:16.284 "bdev_lvol_set_parent_bdev", 00:05:16.284 "bdev_lvol_set_parent", 00:05:16.284 "bdev_lvol_check_shallow_copy", 00:05:16.284 "bdev_lvol_start_shallow_copy", 00:05:16.284 "bdev_lvol_grow_lvstore", 00:05:16.284 "bdev_lvol_get_lvols", 00:05:16.284 "bdev_lvol_get_lvstores", 00:05:16.284 "bdev_lvol_delete", 00:05:16.284 "bdev_lvol_set_read_only", 00:05:16.284 "bdev_lvol_resize", 00:05:16.284 "bdev_lvol_decouple_parent", 00:05:16.284 "bdev_lvol_inflate", 00:05:16.284 "bdev_lvol_rename", 00:05:16.284 "bdev_lvol_clone_bdev", 00:05:16.284 "bdev_lvol_clone", 00:05:16.284 "bdev_lvol_snapshot", 
00:05:16.284 "bdev_lvol_create", 00:05:16.284 "bdev_lvol_delete_lvstore", 00:05:16.284 "bdev_lvol_rename_lvstore", 00:05:16.284 "bdev_lvol_create_lvstore", 00:05:16.284 "bdev_raid_set_options", 00:05:16.284 "bdev_raid_remove_base_bdev", 00:05:16.284 "bdev_raid_add_base_bdev", 00:05:16.284 "bdev_raid_delete", 00:05:16.284 "bdev_raid_create", 00:05:16.284 "bdev_raid_get_bdevs", 00:05:16.284 "bdev_error_inject_error", 00:05:16.284 "bdev_error_delete", 00:05:16.284 "bdev_error_create", 00:05:16.284 "bdev_split_delete", 00:05:16.284 "bdev_split_create", 00:05:16.284 "bdev_delay_delete", 00:05:16.284 "bdev_delay_create", 00:05:16.284 "bdev_delay_update_latency", 00:05:16.284 "bdev_zone_block_delete", 00:05:16.284 "bdev_zone_block_create", 00:05:16.284 "blobfs_create", 00:05:16.284 "blobfs_detect", 00:05:16.284 "blobfs_set_cache_size", 00:05:16.284 "bdev_aio_delete", 00:05:16.284 "bdev_aio_rescan", 00:05:16.284 "bdev_aio_create", 00:05:16.284 "bdev_ftl_set_property", 00:05:16.284 "bdev_ftl_get_properties", 00:05:16.284 "bdev_ftl_get_stats", 00:05:16.284 "bdev_ftl_unmap", 00:05:16.284 "bdev_ftl_unload", 00:05:16.284 "bdev_ftl_delete", 00:05:16.284 "bdev_ftl_load", 00:05:16.284 "bdev_ftl_create", 00:05:16.284 "bdev_virtio_attach_controller", 00:05:16.284 "bdev_virtio_scsi_get_devices", 00:05:16.284 "bdev_virtio_detach_controller", 00:05:16.284 "bdev_virtio_blk_set_hotplug", 00:05:16.284 "bdev_iscsi_delete", 00:05:16.284 "bdev_iscsi_create", 00:05:16.284 "bdev_iscsi_set_options", 00:05:16.284 "bdev_uring_delete", 00:05:16.284 "bdev_uring_rescan", 00:05:16.284 "bdev_uring_create", 00:05:16.284 "accel_error_inject_error", 00:05:16.284 "ioat_scan_accel_module", 00:05:16.284 "dsa_scan_accel_module", 00:05:16.284 "iaa_scan_accel_module", 00:05:16.284 "keyring_file_remove_key", 00:05:16.284 "keyring_file_add_key", 00:05:16.284 "keyring_linux_set_options", 00:05:16.284 "fsdev_aio_delete", 00:05:16.284 "fsdev_aio_create", 00:05:16.284 "iscsi_get_histogram", 00:05:16.284 "iscsi_enable_histogram", 00:05:16.284 "iscsi_set_options", 00:05:16.284 "iscsi_get_auth_groups", 00:05:16.284 "iscsi_auth_group_remove_secret", 00:05:16.284 "iscsi_auth_group_add_secret", 00:05:16.284 "iscsi_delete_auth_group", 00:05:16.284 "iscsi_create_auth_group", 00:05:16.284 "iscsi_set_discovery_auth", 00:05:16.284 "iscsi_get_options", 00:05:16.284 "iscsi_target_node_request_logout", 00:05:16.284 "iscsi_target_node_set_redirect", 00:05:16.284 "iscsi_target_node_set_auth", 00:05:16.284 "iscsi_target_node_add_lun", 00:05:16.284 "iscsi_get_stats", 00:05:16.284 "iscsi_get_connections", 00:05:16.284 "iscsi_portal_group_set_auth", 00:05:16.284 "iscsi_start_portal_group", 00:05:16.284 "iscsi_delete_portal_group", 00:05:16.284 "iscsi_create_portal_group", 00:05:16.284 "iscsi_get_portal_groups", 00:05:16.284 "iscsi_delete_target_node", 00:05:16.284 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.284 "iscsi_target_node_add_pg_ig_maps", 00:05:16.284 "iscsi_create_target_node", 00:05:16.284 "iscsi_get_target_nodes", 00:05:16.284 "iscsi_delete_initiator_group", 00:05:16.284 "iscsi_initiator_group_remove_initiators", 00:05:16.284 "iscsi_initiator_group_add_initiators", 00:05:16.284 "iscsi_create_initiator_group", 00:05:16.284 "iscsi_get_initiator_groups", 00:05:16.284 "nvmf_set_crdt", 00:05:16.284 "nvmf_set_config", 00:05:16.284 "nvmf_set_max_subsystems", 00:05:16.284 "nvmf_stop_mdns_prr", 00:05:16.284 "nvmf_publish_mdns_prr", 00:05:16.284 "nvmf_subsystem_get_listeners", 00:05:16.284 "nvmf_subsystem_get_qpairs", 00:05:16.284 
"nvmf_subsystem_get_controllers", 00:05:16.284 "nvmf_get_stats", 00:05:16.284 "nvmf_get_transports", 00:05:16.284 "nvmf_create_transport", 00:05:16.284 "nvmf_get_targets", 00:05:16.284 "nvmf_delete_target", 00:05:16.284 "nvmf_create_target", 00:05:16.284 "nvmf_subsystem_allow_any_host", 00:05:16.284 "nvmf_subsystem_set_keys", 00:05:16.284 "nvmf_discovery_referral_remove_host", 00:05:16.284 "nvmf_discovery_referral_add_host", 00:05:16.284 "nvmf_subsystem_remove_host", 00:05:16.284 "nvmf_subsystem_add_host", 00:05:16.284 "nvmf_ns_remove_host", 00:05:16.284 "nvmf_ns_add_host", 00:05:16.284 "nvmf_subsystem_remove_ns", 00:05:16.284 "nvmf_subsystem_set_ns_ana_group", 00:05:16.284 "nvmf_subsystem_add_ns", 00:05:16.284 "nvmf_subsystem_listener_set_ana_state", 00:05:16.284 "nvmf_discovery_get_referrals", 00:05:16.284 "nvmf_discovery_remove_referral", 00:05:16.284 "nvmf_discovery_add_referral", 00:05:16.284 "nvmf_subsystem_remove_listener", 00:05:16.284 "nvmf_subsystem_add_listener", 00:05:16.284 "nvmf_delete_subsystem", 00:05:16.284 "nvmf_create_subsystem", 00:05:16.284 "nvmf_get_subsystems", 00:05:16.284 "env_dpdk_get_mem_stats", 00:05:16.284 "nbd_get_disks", 00:05:16.284 "nbd_stop_disk", 00:05:16.284 "nbd_start_disk", 00:05:16.284 "ublk_recover_disk", 00:05:16.284 "ublk_get_disks", 00:05:16.284 "ublk_stop_disk", 00:05:16.284 "ublk_start_disk", 00:05:16.284 "ublk_destroy_target", 00:05:16.284 "ublk_create_target", 00:05:16.284 "virtio_blk_create_transport", 00:05:16.284 "virtio_blk_get_transports", 00:05:16.284 "vhost_controller_set_coalescing", 00:05:16.284 "vhost_get_controllers", 00:05:16.284 "vhost_delete_controller", 00:05:16.284 "vhost_create_blk_controller", 00:05:16.284 "vhost_scsi_controller_remove_target", 00:05:16.284 "vhost_scsi_controller_add_target", 00:05:16.284 "vhost_start_scsi_controller", 00:05:16.284 "vhost_create_scsi_controller", 00:05:16.284 "thread_set_cpumask", 00:05:16.284 "scheduler_set_options", 00:05:16.284 "framework_get_governor", 00:05:16.284 "framework_get_scheduler", 00:05:16.284 "framework_set_scheduler", 00:05:16.284 "framework_get_reactors", 00:05:16.284 "thread_get_io_channels", 00:05:16.284 "thread_get_pollers", 00:05:16.284 "thread_get_stats", 00:05:16.284 "framework_monitor_context_switch", 00:05:16.284 "spdk_kill_instance", 00:05:16.284 "log_enable_timestamps", 00:05:16.284 "log_get_flags", 00:05:16.284 "log_clear_flag", 00:05:16.284 "log_set_flag", 00:05:16.284 "log_get_level", 00:05:16.284 "log_set_level", 00:05:16.284 "log_get_print_level", 00:05:16.284 "log_set_print_level", 00:05:16.284 "framework_enable_cpumask_locks", 00:05:16.284 "framework_disable_cpumask_locks", 00:05:16.284 "framework_wait_init", 00:05:16.284 "framework_start_init", 00:05:16.284 "scsi_get_devices", 00:05:16.284 "bdev_get_histogram", 00:05:16.284 "bdev_enable_histogram", 00:05:16.284 "bdev_set_qos_limit", 00:05:16.284 "bdev_set_qd_sampling_period", 00:05:16.284 "bdev_get_bdevs", 00:05:16.284 "bdev_reset_iostat", 00:05:16.284 "bdev_get_iostat", 00:05:16.284 "bdev_examine", 00:05:16.284 "bdev_wait_for_examine", 00:05:16.284 "bdev_set_options", 00:05:16.284 "accel_get_stats", 00:05:16.284 "accel_set_options", 00:05:16.284 "accel_set_driver", 00:05:16.284 "accel_crypto_key_destroy", 00:05:16.284 "accel_crypto_keys_get", 00:05:16.284 "accel_crypto_key_create", 00:05:16.284 "accel_assign_opc", 00:05:16.284 "accel_get_module_info", 00:05:16.284 "accel_get_opc_assignments", 00:05:16.284 "vmd_rescan", 00:05:16.284 "vmd_remove_device", 00:05:16.284 "vmd_enable", 00:05:16.284 
"sock_get_default_impl", 00:05:16.284 "sock_set_default_impl", 00:05:16.284 "sock_impl_set_options", 00:05:16.284 "sock_impl_get_options", 00:05:16.284 "iobuf_get_stats", 00:05:16.284 "iobuf_set_options", 00:05:16.284 "keyring_get_keys", 00:05:16.284 "framework_get_pci_devices", 00:05:16.284 "framework_get_config", 00:05:16.284 "framework_get_subsystems", 00:05:16.284 "fsdev_set_opts", 00:05:16.284 "fsdev_get_opts", 00:05:16.284 "trace_get_info", 00:05:16.284 "trace_get_tpoint_group_mask", 00:05:16.284 "trace_disable_tpoint_group", 00:05:16.284 "trace_enable_tpoint_group", 00:05:16.284 "trace_clear_tpoint_mask", 00:05:16.284 "trace_set_tpoint_mask", 00:05:16.284 "notify_get_notifications", 00:05:16.284 "notify_get_types", 00:05:16.284 "spdk_get_version", 00:05:16.284 "rpc_get_methods" 00:05:16.284 ] 00:05:16.284 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.284 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.284 10:27:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57758 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57758 ']' 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57758 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:16.284 10:27:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57758 00:05:16.285 killing process with pid 57758 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57758' 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57758 00:05:16.285 10:27:04 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57758 00:05:16.544 ************************************ 00:05:16.544 END TEST spdkcli_tcp 00:05:16.544 ************************************ 00:05:16.544 00:05:16.544 real 0m1.119s 00:05:16.544 user 0m1.911s 00:05:16.544 sys 0m0.319s 00:05:16.544 10:27:05 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.544 10:27:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.544 10:27:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.544 10:27:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.544 10:27:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.544 10:27:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.544 ************************************ 00:05:16.544 START TEST dpdk_mem_utility 00:05:16.545 ************************************ 00:05:16.545 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.545 * Looking for test storage... 
00:05:16.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.545 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.545 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.545 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.804 10:27:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.804 --rc genhtml_branch_coverage=1 00:05:16.804 --rc genhtml_function_coverage=1 00:05:16.804 --rc genhtml_legend=1 00:05:16.804 --rc geninfo_all_blocks=1 00:05:16.804 --rc geninfo_unexecuted_blocks=1 00:05:16.804 00:05:16.804 ' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.804 --rc 
genhtml_branch_coverage=1 00:05:16.804 --rc genhtml_function_coverage=1 00:05:16.804 --rc genhtml_legend=1 00:05:16.804 --rc geninfo_all_blocks=1 00:05:16.804 --rc geninfo_unexecuted_blocks=1 00:05:16.804 00:05:16.804 ' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.804 --rc genhtml_branch_coverage=1 00:05:16.804 --rc genhtml_function_coverage=1 00:05:16.804 --rc genhtml_legend=1 00:05:16.804 --rc geninfo_all_blocks=1 00:05:16.804 --rc geninfo_unexecuted_blocks=1 00:05:16.804 00:05:16.804 ' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.804 --rc genhtml_branch_coverage=1 00:05:16.804 --rc genhtml_function_coverage=1 00:05:16.804 --rc genhtml_legend=1 00:05:16.804 --rc geninfo_all_blocks=1 00:05:16.804 --rc geninfo_unexecuted_blocks=1 00:05:16.804 00:05:16.804 ' 00:05:16.804 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.804 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57844 00:05:16.804 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.804 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57844 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57844 ']' 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.804 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.804 [2024-11-12 10:27:05.438753] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:16.804 [2024-11-12 10:27:05.439026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57844 ] 00:05:17.064 [2024-11-12 10:27:05.585046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.064 [2024-11-12 10:27:05.613971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.064 [2024-11-12 10:27:05.651139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.064 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.064 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:17.064 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.064 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.064 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.064 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.064 { 00:05:17.064 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.064 } 00:05:17.064 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.064 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.324 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:17.324 1 heaps totaling size 810.000000 MiB 00:05:17.324 size: 810.000000 MiB heap id: 0 00:05:17.324 end heaps---------- 00:05:17.324 9 mempools totaling size 595.772034 MiB 00:05:17.324 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.324 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.324 size: 92.545471 MiB name: bdev_io_57844 00:05:17.324 size: 50.003479 MiB name: msgpool_57844 00:05:17.324 size: 36.509338 MiB name: fsdev_io_57844 00:05:17.324 size: 21.763794 MiB name: PDU_Pool 00:05:17.324 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.324 size: 4.133484 MiB name: evtpool_57844 00:05:17.324 size: 0.026123 MiB name: Session_Pool 00:05:17.324 end mempools------- 00:05:17.324 6 memzones totaling size 4.142822 MiB 00:05:17.324 size: 1.000366 MiB name: RG_ring_0_57844 00:05:17.324 size: 1.000366 MiB name: RG_ring_1_57844 00:05:17.324 size: 1.000366 MiB name: RG_ring_4_57844 00:05:17.324 size: 1.000366 MiB name: RG_ring_5_57844 00:05:17.324 size: 0.125366 MiB name: RG_ring_2_57844 00:05:17.324 size: 0.015991 MiB name: RG_ring_3_57844 00:05:17.324 end memzones------- 00:05:17.324 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.324 heap id: 0 total size: 810.000000 MiB number of busy elements: 314 number of free elements: 15 00:05:17.324 list of free elements. 
size: 10.813049 MiB 00:05:17.324 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:17.324 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:17.324 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:17.324 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:17.324 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:17.324 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:17.324 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:17.324 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:17.324 element at address: 0x20001a600000 with size: 0.567505 MiB 00:05:17.324 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:17.324 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:17.324 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:17.324 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:17.324 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:17.324 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:17.324 list of standard malloc elements. size: 199.268066 MiB 00:05:17.324 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:17.324 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:17.324 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:17.324 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:17.324 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:17.324 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.324 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:17.324 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.324 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:17.324 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:17.324 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:17.324 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:17.325 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691480 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:17.325 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a692ec0 with size: 0.000183 MiB 
00:05:17.326 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:17.326 element at 
address: 0x20001a695440 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e580 
with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:17.326 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:17.327 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:17.327 list of memzone associated elements. 
size: 599.918884 MiB 00:05:17.327 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:17.327 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.327 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:17.327 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.327 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:17.327 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57844_0 00:05:17.327 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:17.327 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57844_0 00:05:17.327 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:17.327 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57844_0 00:05:17.327 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:17.327 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.327 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:17.327 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.327 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:17.327 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57844_0 00:05:17.327 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:17.327 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57844 00:05:17.327 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:17.327 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57844 00:05:17.327 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:17.327 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.327 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:17.327 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.327 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:17.327 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.327 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:17.327 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.327 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:17.327 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57844 00:05:17.327 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:17.327 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57844 00:05:17.327 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:17.327 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57844 00:05:17.327 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:17.327 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57844 00:05:17.327 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:17.327 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57844 00:05:17.327 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:17.327 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57844 00:05:17.327 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:17.327 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.327 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:17.327 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.327 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:17.327 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.327 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:17.327 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57844 00:05:17.327 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:17.327 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57844 00:05:17.327 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:17.327 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.327 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:17.327 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.327 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:17.327 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57844 00:05:17.327 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:17.327 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.327 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:17.327 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57844 00:05:17.327 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:17.327 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57844 00:05:17.327 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:17.327 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57844 00:05:17.327 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:17.327 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.327 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.327 10:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57844 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57844 ']' 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57844 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57844 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.327 killing process with pid 57844 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57844' 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57844 00:05:17.327 10:27:05 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57844 00:05:17.587 ************************************ 00:05:17.587 END TEST dpdk_mem_utility 00:05:17.587 ************************************ 00:05:17.587 00:05:17.587 real 0m0.979s 00:05:17.587 user 0m1.020s 00:05:17.587 sys 0m0.297s 00:05:17.587 10:27:06 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.587 10:27:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.587 10:27:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.587 10:27:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.587 10:27:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.587 10:27:06 -- common/autotest_common.sh@10 -- # set +x 
00:05:17.587 ************************************ 00:05:17.587 START TEST event 00:05:17.587 ************************************ 00:05:17.587 10:27:06 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.587 * Looking for test storage... 00:05:17.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.587 10:27:06 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.587 10:27:06 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.587 10:27:06 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.846 10:27:06 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.846 10:27:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.846 10:27:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.846 10:27:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.846 10:27:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.846 10:27:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.846 10:27:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.846 10:27:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.846 10:27:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.846 10:27:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.846 10:27:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.846 10:27:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.846 10:27:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:17.846 10:27:06 event -- scripts/common.sh@345 -- # : 1 00:05:17.846 10:27:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.846 10:27:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.846 10:27:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:17.846 10:27:06 event -- scripts/common.sh@353 -- # local d=1 00:05:17.846 10:27:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.846 10:27:06 event -- scripts/common.sh@355 -- # echo 1 00:05:17.846 10:27:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.846 10:27:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:17.846 10:27:06 event -- scripts/common.sh@353 -- # local d=2 00:05:17.846 10:27:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.846 10:27:06 event -- scripts/common.sh@355 -- # echo 2 00:05:17.846 10:27:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.846 10:27:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.846 10:27:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.846 10:27:06 event -- scripts/common.sh@368 -- # return 0 00:05:17.846 10:27:06 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.846 10:27:06 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.846 --rc genhtml_branch_coverage=1 00:05:17.846 --rc genhtml_function_coverage=1 00:05:17.846 --rc genhtml_legend=1 00:05:17.846 --rc geninfo_all_blocks=1 00:05:17.846 --rc geninfo_unexecuted_blocks=1 00:05:17.846 00:05:17.847 ' 00:05:17.847 10:27:06 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.847 --rc genhtml_branch_coverage=1 00:05:17.847 --rc genhtml_function_coverage=1 00:05:17.847 --rc genhtml_legend=1 00:05:17.847 --rc 
geninfo_all_blocks=1 00:05:17.847 --rc geninfo_unexecuted_blocks=1 00:05:17.847 00:05:17.847 ' 00:05:17.847 10:27:06 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.847 --rc genhtml_branch_coverage=1 00:05:17.847 --rc genhtml_function_coverage=1 00:05:17.847 --rc genhtml_legend=1 00:05:17.847 --rc geninfo_all_blocks=1 00:05:17.847 --rc geninfo_unexecuted_blocks=1 00:05:17.847 00:05:17.847 ' 00:05:17.847 10:27:06 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.847 --rc genhtml_branch_coverage=1 00:05:17.847 --rc genhtml_function_coverage=1 00:05:17.847 --rc genhtml_legend=1 00:05:17.847 --rc geninfo_all_blocks=1 00:05:17.847 --rc geninfo_unexecuted_blocks=1 00:05:17.847 00:05:17.847 ' 00:05:17.847 10:27:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:17.847 10:27:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.847 10:27:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.847 10:27:06 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:17.847 10:27:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.847 10:27:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 ************************************ 00:05:17.847 START TEST event_perf 00:05:17.847 ************************************ 00:05:17.847 10:27:06 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.847 Running I/O for 1 seconds...[2024-11-12 10:27:06.417832] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:17.847 [2024-11-12 10:27:06.418432] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:05:17.847 [2024-11-12 10:27:06.563001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.847 [2024-11-12 10:27:06.593502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.847 [2024-11-12 10:27:06.593638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.847 [2024-11-12 10:27:06.593762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.847 [2024-11-12 10:27:06.593763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.225 Running I/O for 1 seconds... 00:05:19.225 lcore 0: 203540 00:05:19.225 lcore 1: 203539 00:05:19.225 lcore 2: 203539 00:05:19.225 lcore 3: 203539 00:05:19.225 done. 
00:05:19.225 00:05:19.225 real 0m1.237s 00:05:19.225 user 0m4.072s 00:05:19.225 sys 0m0.045s 00:05:19.225 10:27:07 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.225 10:27:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.225 ************************************ 00:05:19.225 END TEST event_perf 00:05:19.225 ************************************ 00:05:19.225 10:27:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.225 10:27:07 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:19.225 10:27:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.225 10:27:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.225 ************************************ 00:05:19.225 START TEST event_reactor 00:05:19.225 ************************************ 00:05:19.225 10:27:07 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.225 [2024-11-12 10:27:07.695416] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:19.225 [2024-11-12 10:27:07.695497] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:05:19.225 [2024-11-12 10:27:07.836107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.225 [2024-11-12 10:27:07.863866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.163 test_start 00:05:20.163 oneshot 00:05:20.163 tick 100 00:05:20.163 tick 100 00:05:20.163 tick 250 00:05:20.163 tick 100 00:05:20.163 tick 100 00:05:20.163 tick 100 00:05:20.163 tick 250 00:05:20.163 tick 500 00:05:20.163 tick 100 00:05:20.163 tick 100 00:05:20.163 tick 250 00:05:20.163 tick 100 00:05:20.163 tick 100 00:05:20.163 test_end 00:05:20.163 00:05:20.163 real 0m1.221s 00:05:20.163 user 0m1.087s 00:05:20.163 sys 0m0.029s 00:05:20.163 10:27:08 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.163 ************************************ 00:05:20.163 END TEST event_reactor 00:05:20.163 ************************************ 00:05:20.163 10:27:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.422 10:27:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.422 10:27:08 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:20.422 10:27:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.422 10:27:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.422 ************************************ 00:05:20.422 START TEST event_reactor_perf 00:05:20.422 ************************************ 00:05:20.422 10:27:08 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.422 [2024-11-12 10:27:08.971227] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:20.422 [2024-11-12 10:27:08.971316] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57990 ] 00:05:20.422 [2024-11-12 10:27:09.116451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.422 [2024-11-12 10:27:09.143397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.800 test_start 00:05:21.800 test_end 00:05:21.800 Performance: 446540 events per second 00:05:21.800 00:05:21.800 real 0m1.225s 00:05:21.800 user 0m1.086s 00:05:21.800 sys 0m0.034s 00:05:21.800 10:27:10 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.800 ************************************ 00:05:21.800 END TEST event_reactor_perf 00:05:21.800 ************************************ 00:05:21.800 10:27:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.800 10:27:10 event -- event/event.sh@49 -- # uname -s 00:05:21.800 10:27:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.800 10:27:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.800 10:27:10 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.800 10:27:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.800 10:27:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.800 ************************************ 00:05:21.800 START TEST event_scheduler 00:05:21.800 ************************************ 00:05:21.800 10:27:10 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.800 * Looking for test storage... 
00:05:21.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.800 10:27:10 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.800 10:27:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.800 10:27:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.800 10:27:10 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:21.800 10:27:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.801 10:27:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.801 10:27:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.801 10:27:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.801 --rc genhtml_branch_coverage=1 00:05:21.801 --rc genhtml_function_coverage=1 00:05:21.801 --rc genhtml_legend=1 00:05:21.801 --rc geninfo_all_blocks=1 00:05:21.801 --rc geninfo_unexecuted_blocks=1 00:05:21.801 00:05:21.801 ' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.801 --rc genhtml_branch_coverage=1 00:05:21.801 --rc genhtml_function_coverage=1 00:05:21.801 --rc genhtml_legend=1 00:05:21.801 --rc geninfo_all_blocks=1 00:05:21.801 --rc geninfo_unexecuted_blocks=1 00:05:21.801 00:05:21.801 ' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.801 --rc genhtml_branch_coverage=1 00:05:21.801 --rc genhtml_function_coverage=1 00:05:21.801 --rc genhtml_legend=1 00:05:21.801 --rc geninfo_all_blocks=1 00:05:21.801 --rc geninfo_unexecuted_blocks=1 00:05:21.801 00:05:21.801 ' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.801 --rc genhtml_branch_coverage=1 00:05:21.801 --rc genhtml_function_coverage=1 00:05:21.801 --rc genhtml_legend=1 00:05:21.801 --rc geninfo_all_blocks=1 00:05:21.801 --rc geninfo_unexecuted_blocks=1 00:05:21.801 00:05:21.801 ' 00:05:21.801 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.801 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58054 00:05:21.801 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.801 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.801 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58054 00:05:21.801 10:27:10 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58054 ']' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.801 10:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.801 [2024-11-12 10:27:10.438112] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:21.801 [2024-11-12 10:27:10.438235] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58054 ] 00:05:22.061 [2024-11-12 10:27:10.583950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.061 [2024-11-12 10:27:10.623168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.061 [2024-11-12 10:27:10.623279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.061 [2024-11-12 10:27:10.623417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.061 [2024-11-12 10:27:10.623436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:22.061 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.061 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.061 POWER: Cannot set governor of lcore 0 to performance 00:05:22.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.061 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.061 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.061 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.061 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:22.061 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.061 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.061 [2024-11-12 10:27:10.705266] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.061 [2024-11-12 10:27:10.705278] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:22.061 [2024-11-12 10:27:10.705302] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.061 [2024-11-12 10:27:10.705567] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.061 [2024-11-12 10:27:10.705578] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.061 [2024-11-12 10:27:10.705585] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 [2024-11-12 10:27:10.739164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.061 [2024-11-12 10:27:10.755676] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 ************************************ 00:05:22.061 START TEST scheduler_create_thread 00:05:22.061 ************************************ 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 2 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 3 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 4 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 5 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 6 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.061 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 7 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 8 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 9 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 10 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.320 10:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.888 10:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.888 10:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.888 10:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.888 10:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.888 10:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.827 10:27:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.827 00:05:23.827 real 0m1.751s 00:05:23.827 user 0m0.018s 00:05:23.827 sys 0m0.005s 00:05:23.827 10:27:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.827 10:27:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.827 ************************************ 00:05:23.827 END TEST scheduler_create_thread 00:05:23.827 ************************************ 00:05:23.827 10:27:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.827 10:27:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58054 00:05:23.827 10:27:12 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58054 ']' 00:05:23.827 10:27:12 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58054 00:05:23.827 10:27:12 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:23.827 10:27:12 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.827 10:27:12 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58054 00:05:24.087 10:27:12 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:24.087 10:27:12 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:24.087 killing process with pid 58054 00:05:24.087 10:27:12 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58054' 00:05:24.087 10:27:12 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58054 00:05:24.087 10:27:12 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58054 00:05:24.346 [2024-11-12 10:27:12.998004] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:24.605 00:05:24.605 real 0m2.887s 00:05:24.605 user 0m3.662s 00:05:24.605 sys 0m0.316s 00:05:24.605 10:27:13 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.605 10:27:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.605 ************************************ 00:05:24.605 END TEST event_scheduler 00:05:24.605 ************************************ 00:05:24.605 10:27:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.605 10:27:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.605 10:27:13 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.605 10:27:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.605 10:27:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.605 ************************************ 00:05:24.605 START TEST app_repeat 00:05:24.605 ************************************ 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58135 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.606 Process app_repeat pid: 58135 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58135' 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.606 spdk_app_start Round 0 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.606 10:27:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
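The event_scheduler phase that finishes just above is driven entirely over the RPC socket: the test app is held at --wait-for-rpc, the dynamic scheduler is selected, the framework is started, and threads with different CPU masks and active percentages are created, re-weighted and deleted through the test's scheduler_plugin RPCs. A minimal sketch of that sequence, assuming an SPDK checkout at $SPDK_DIR and that the test's scheduler_plugin module is importable via PYTHONPATH (both paths here are illustrative, not taken from this run):

    # launch the scheduler test app on 4 cores (main core 2) and hold it at --wait-for-rpc
    $SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    rpc="$SPDK_DIR/scripts/rpc.py"        # talks to /var/tmp/spdk.sock by default
    $rpc framework_set_scheduler dynamic  # must be selected before the framework starts
    $rpc framework_start_init             # reactors begin polling, scheduler becomes active

    # the scheduler_thread_* RPCs come from the test's plugin, not from rpc.py itself
    export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid2=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid2"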
00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.606 10:27:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.606 [2024-11-12 10:27:13.207064] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:24.606 [2024-11-12 10:27:13.207164] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58135 ] 00:05:24.606 [2024-11-12 10:27:13.353479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.865 [2024-11-12 10:27:13.386319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.865 [2024-11-12 10:27:13.386334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.865 [2024-11-12 10:27:13.414298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.865 10:27:13 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.865 10:27:13 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:24.865 10:27:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.124 Malloc0 00:05:25.124 10:27:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.383 Malloc1 00:05:25.383 10:27:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.383 10:27:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.384 10:27:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.643 /dev/nbd0 00:05:25.643 10:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.643 10:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.643 1+0 records in 00:05:25.643 1+0 records out 00:05:25.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337884 s, 12.1 MB/s 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:25.643 10:27:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:25.643 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.643 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.643 10:27:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.903 /dev/nbd1 00:05:25.903 10:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.903 10:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:25.903 10:27:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.904 1+0 records in 00:05:25.904 1+0 records out 00:05:25.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244 s, 16.8 MB/s 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.904 10:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:25.904 10:27:14 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:05:25.904 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.904 10:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.904 10:27:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.904 10:27:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.904 10:27:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.473 { 00:05:26.473 "nbd_device": "/dev/nbd0", 00:05:26.473 "bdev_name": "Malloc0" 00:05:26.473 }, 00:05:26.473 { 00:05:26.473 "nbd_device": "/dev/nbd1", 00:05:26.473 "bdev_name": "Malloc1" 00:05:26.473 } 00:05:26.473 ]' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.473 { 00:05:26.473 "nbd_device": "/dev/nbd0", 00:05:26.473 "bdev_name": "Malloc0" 00:05:26.473 }, 00:05:26.473 { 00:05:26.473 "nbd_device": "/dev/nbd1", 00:05:26.473 "bdev_name": "Malloc1" 00:05:26.473 } 00:05:26.473 ]' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.473 /dev/nbd1' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.473 /dev/nbd1' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.473 256+0 records in 00:05:26.473 256+0 records out 00:05:26.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439034 s, 239 MB/s 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.473 10:27:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.473 256+0 records in 00:05:26.473 256+0 records out 00:05:26.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222632 s, 47.1 MB/s 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.473 256+0 records in 00:05:26.473 
256+0 records out 00:05:26.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282936 s, 37.1 MB/s 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.473 10:27:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.733 10:27:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.992 10:27:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.250 10:27:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.251 10:27:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.251 10:27:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.251 10:27:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.510 10:27:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.769 [2024-11-12 10:27:16.283653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.769 [2024-11-12 10:27:16.310732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.769 [2024-11-12 10:27:16.310738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.769 [2024-11-12 10:27:16.338425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.769 [2024-11-12 10:27:16.338510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.769 [2024-11-12 10:27:16.338521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.060 10:27:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.060 spdk_app_start Round 1 00:05:31.060 10:27:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.060 10:27:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
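The app_repeat round that just completed follows the same data-path check used in every round: two 64 MB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each NBD device with O_DIRECT, compared back against the source file, and the devices are torn down before the app is killed with SIGTERM. A condensed sketch of one round, assuming the app_repeat instance is already listening on /var/tmp/spdk-nbd.sock (the temp file name is illustrative):

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/tmp/nbdrandtest

    $rpc bdev_malloc_create 64 4096                  # -> Malloc0 (64 MB, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096                  # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=$tmp bs=4096 count=256     # 1 MiB of reference data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M $tmp $d                         # fails loudly on any mismatch
    done
    rm -f $tmp

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM                  # end of the round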
00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.060 10:27:19 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:31.060 10:27:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.060 Malloc0 00:05:31.060 10:27:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.321 Malloc1 00:05:31.321 10:27:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.321 10:27:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.322 10:27:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.581 /dev/nbd0 00:05:31.581 10:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.581 10:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.581 1+0 records in 00:05:31.581 1+0 records out 
00:05:31.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318969 s, 12.8 MB/s 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:31.581 10:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:31.581 10:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.581 10:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.581 10:27:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.840 /dev/nbd1 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.840 1+0 records in 00:05:31.840 1+0 records out 00:05:31.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171602 s, 23.9 MB/s 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:31.840 10:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.840 10:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.408 10:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.408 { 00:05:32.408 "nbd_device": "/dev/nbd0", 00:05:32.408 "bdev_name": "Malloc0" 00:05:32.408 }, 00:05:32.408 { 00:05:32.408 "nbd_device": "/dev/nbd1", 00:05:32.408 "bdev_name": "Malloc1" 00:05:32.408 } 
00:05:32.408 ]' 00:05:32.408 10:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.408 { 00:05:32.408 "nbd_device": "/dev/nbd0", 00:05:32.408 "bdev_name": "Malloc0" 00:05:32.408 }, 00:05:32.408 { 00:05:32.408 "nbd_device": "/dev/nbd1", 00:05:32.408 "bdev_name": "Malloc1" 00:05:32.408 } 00:05:32.408 ]' 00:05:32.408 10:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.408 10:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.408 /dev/nbd1' 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.409 /dev/nbd1' 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.409 256+0 records in 00:05:32.409 256+0 records out 00:05:32.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591488 s, 177 MB/s 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.409 256+0 records in 00:05:32.409 256+0 records out 00:05:32.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241225 s, 43.5 MB/s 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.409 10:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.409 256+0 records in 00:05:32.409 256+0 records out 00:05:32.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251801 s, 41.6 MB/s 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.409 10:27:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.409 10:27:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.668 10:27:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.927 10:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.186 10:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.186 10:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.186 10:27:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:33.445 10:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.445 10:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.445 10:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.445 10:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.445 10:27:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.445 10:27:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.445 10:27:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.445 10:27:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.445 10:27:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.445 10:27:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.705 10:27:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.705 [2024-11-12 10:27:22.404145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.705 [2024-11-12 10:27:22.431681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.705 [2024-11-12 10:27:22.431691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.705 [2024-11-12 10:27:22.460566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.705 [2024-11-12 10:27:22.460676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.705 [2024-11-12 10:27:22.460690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.995 spdk_app_start Round 2 00:05:36.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.995 10:27:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.995 10:27:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:36.995 10:27:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
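The waitfornbd helper traced in each round (and again below for round 2) does two readiness checks before any data is written: it polls /proc/partitions until the kernel has registered the nbd device, then issues a single 4 KiB O_DIRECT read to confirm the device actually serves I/O. A rough equivalent of that probe, assuming the device name is passed as the first argument; this is a sketch, not the exact helper from autotest_common.sh:

    wait_for_nbd() {
        local dev=$1
        for i in $(seq 1 20); do                    # device node shows up in /proc/partitions
            grep -q -w "$dev" /proc/partitions && break
            sleep 0.1
        done
        for i in $(seq 1 20); do                    # first O_DIRECT read must return data
            dd if=/dev/$dev of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null
            if [ "$(stat -c %s /tmp/nbdtest 2>/dev/null)" != "0" ]; then
                rm -f /tmp/nbdtest
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_nbd nbd0 && echo "/dev/nbd0 is ready"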
00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.995 10:27:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:36.995 10:27:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.254 Malloc0 00:05:37.254 10:27:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.513 Malloc1 00:05:37.513 10:27:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.513 10:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.772 /dev/nbd0 00:05:37.772 10:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.773 10:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.773 1+0 records in 00:05:37.773 1+0 records out 
00:05:37.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311367 s, 13.2 MB/s 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:37.773 10:27:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:37.773 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.773 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.773 10:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.032 /dev/nbd1 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.032 1+0 records in 00:05:38.032 1+0 records out 00:05:38.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268329 s, 15.3 MB/s 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:38.032 10:27:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.032 10:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.291 10:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.291 { 00:05:38.291 "nbd_device": "/dev/nbd0", 00:05:38.291 "bdev_name": "Malloc0" 00:05:38.291 }, 00:05:38.291 { 00:05:38.291 "nbd_device": "/dev/nbd1", 00:05:38.291 "bdev_name": "Malloc1" 00:05:38.291 } 
00:05:38.291 ]' 00:05:38.291 10:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.291 { 00:05:38.291 "nbd_device": "/dev/nbd0", 00:05:38.291 "bdev_name": "Malloc0" 00:05:38.291 }, 00:05:38.291 { 00:05:38.291 "nbd_device": "/dev/nbd1", 00:05:38.291 "bdev_name": "Malloc1" 00:05:38.291 } 00:05:38.291 ]' 00:05:38.291 10:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.551 /dev/nbd1' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.551 /dev/nbd1' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.551 256+0 records in 00:05:38.551 256+0 records out 00:05:38.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107574 s, 97.5 MB/s 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.551 256+0 records in 00:05:38.551 256+0 records out 00:05:38.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206921 s, 50.7 MB/s 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.551 256+0 records in 00:05:38.551 256+0 records out 00:05:38.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244517 s, 42.9 MB/s 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.551 10:27:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.551 10:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.810 10:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.069 10:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.328 10:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.587 10:27:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.587 10:27:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.845 10:27:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.845 [2024-11-12 10:27:28.586957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.104 [2024-11-12 10:27:28.618621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.104 [2024-11-12 10:27:28.618632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.104 [2024-11-12 10:27:28.651102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.104 [2024-11-12 10:27:28.651218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.105 [2024-11-12 10:27:28.651234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.786 10:27:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58135 /var/tmp/spdk-nbd.sock 00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58135 ']' 00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
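A condensed, stand-alone version of the nbd_dd_data_verify round trip exercised in the app_repeat trace above: random data is written through each exported /dev/nbdX with O_DIRECT and then compared back against the reference file. The block sizes, the 1 MiB compare window and the cmp/dd flags mirror the trace; the helper functions, waitfornbd polling and error reporting of the real bdev/nbd_common.sh are omitted, so treat this as an illustrative sketch rather than the actual helper.

  #!/usr/bin/env bash
  # Sketch of the nbd write/verify loop seen in the xtrace above (illustrative only).
  set -euo pipefail

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

  # Write phase: 256 x 4 KiB of random data, pushed to each nbd device with O_DIRECT.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: the first 1 MiB of every device must match the reference file.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done

  rm -f "$tmp_file"
  echo "nbd round-trip verified on ${nbd_list[*]}"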
00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.786 10:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:43.354 10:27:31 event.app_repeat -- event/event.sh@39 -- # killprocess 58135 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58135 ']' 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58135 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58135 00:05:43.354 killing process with pid 58135 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58135' 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58135 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58135 00:05:43.354 spdk_app_start is called in Round 0. 00:05:43.354 Shutdown signal received, stop current app iteration 00:05:43.354 Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 reinitialization... 00:05:43.354 spdk_app_start is called in Round 1. 00:05:43.354 Shutdown signal received, stop current app iteration 00:05:43.354 Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 reinitialization... 00:05:43.354 spdk_app_start is called in Round 2. 00:05:43.354 Shutdown signal received, stop current app iteration 00:05:43.354 Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 reinitialization... 00:05:43.354 spdk_app_start is called in Round 3. 00:05:43.354 Shutdown signal received, stop current app iteration 00:05:43.354 10:27:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.354 10:27:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:43.354 00:05:43.354 real 0m18.796s 00:05:43.354 user 0m43.365s 00:05:43.354 sys 0m2.583s 00:05:43.354 ************************************ 00:05:43.354 END TEST app_repeat 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.354 10:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 ************************************ 00:05:43.354 10:27:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.354 10:27:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.354 10:27:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.354 10:27:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.354 10:27:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 ************************************ 00:05:43.354 START TEST cpu_locks 00:05:43.354 ************************************ 00:05:43.354 10:27:32 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.354 * Looking for test storage... 
00:05:43.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.354 10:27:32 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.354 10:27:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.354 10:27:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.614 10:27:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.614 10:27:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:43.614 10:27:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.614 10:27:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.614 --rc genhtml_branch_coverage=1 00:05:43.614 --rc genhtml_function_coverage=1 00:05:43.614 --rc genhtml_legend=1 00:05:43.614 --rc geninfo_all_blocks=1 00:05:43.614 --rc geninfo_unexecuted_blocks=1 00:05:43.614 00:05:43.614 ' 00:05:43.614 10:27:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.614 --rc genhtml_branch_coverage=1 00:05:43.614 --rc genhtml_function_coverage=1 
00:05:43.614 --rc genhtml_legend=1 00:05:43.614 --rc geninfo_all_blocks=1 00:05:43.614 --rc geninfo_unexecuted_blocks=1 00:05:43.614 00:05:43.614 ' 00:05:43.614 10:27:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.614 --rc genhtml_branch_coverage=1 00:05:43.614 --rc genhtml_function_coverage=1 00:05:43.614 --rc genhtml_legend=1 00:05:43.614 --rc geninfo_all_blocks=1 00:05:43.614 --rc geninfo_unexecuted_blocks=1 00:05:43.614 00:05:43.614 ' 00:05:43.615 10:27:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.615 --rc genhtml_branch_coverage=1 00:05:43.615 --rc genhtml_function_coverage=1 00:05:43.615 --rc genhtml_legend=1 00:05:43.615 --rc geninfo_all_blocks=1 00:05:43.615 --rc geninfo_unexecuted_blocks=1 00:05:43.615 00:05:43.615 ' 00:05:43.615 10:27:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.615 10:27:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.615 10:27:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.615 10:27:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.615 10:27:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.615 10:27:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.615 10:27:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.615 ************************************ 00:05:43.615 START TEST default_locks 00:05:43.615 ************************************ 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58568 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58568 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58568 ']' 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.615 10:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.615 [2024-11-12 10:27:32.280007] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:43.615 [2024-11-12 10:27:32.280134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58568 ] 00:05:43.874 [2024-11-12 10:27:32.421742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.874 [2024-11-12 10:27:32.450538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.874 [2024-11-12 10:27:32.488451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.811 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.811 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:44.811 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58568 00:05:44.811 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58568 00:05:44.811 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58568 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58568 ']' 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58568 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58568 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.070 killing process with pid 58568 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58568' 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58568 00:05:45.070 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58568 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58568 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58568 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58568 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58568 ']' 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.329 
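The default_locks steps traced here boil down to three assertions: a target started with -m 0x1 must hold an advisory lock visible to lslocks as spdk_cpu_lock, killing it must succeed, and a later look-up of the dead pid must fail (which is what the NOT waitforlisten block asserts). A minimal re-creation of that flow, with the binary path and mask taken from the log and the socket polling of waitforlisten replaced by a sleep, purely as a sketch:

  #!/usr/bin/env bash
  # Simplified default_locks flow (not the real autotest helpers).
  set -euo pipefail

  SPDK_TGT=${SPDK_TGT:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt}

  "$SPDK_TGT" -m 0x1 &           # first instance claims core 0
  pid=$!
  sleep 1                        # the real test polls /var/tmp/spdk.sock instead

  # locks_exist: the target must hold an advisory lock named spdk_cpu_lock_*.
  lslocks -p "$pid" | grep -q spdk_cpu_lock

  # killprocess: confirm the pid is alive, terminate it, reap it.
  kill -0 "$pid"
  kill "$pid"
  wait "$pid" || true

  # Once the process is gone, looking it up again must fail,
  # mirroring the NOT waitforlisten / "No such process" lines in the trace.
  ! kill -0 "$pid" 2>/dev/null
  echo "default_locks sketch passed"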
10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58568) - No such process 00:05:45.329 ERROR: process (pid: 58568) is no longer running 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.329 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.330 10:27:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.330 00:05:45.330 real 0m1.705s 00:05:45.330 user 0m1.951s 00:05:45.330 sys 0m0.428s 00:05:45.330 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.330 ************************************ 00:05:45.330 END TEST default_locks 00:05:45.330 ************************************ 00:05:45.330 10:27:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.330 10:27:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.330 10:27:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.330 10:27:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.330 10:27:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.330 ************************************ 00:05:45.330 START TEST default_locks_via_rpc 00:05:45.330 ************************************ 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58620 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58620 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58620 ']' 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:05:45.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.330 10:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.330 [2024-11-12 10:27:34.027794] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:45.330 [2024-11-12 10:27:34.027901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58620 ] 00:05:45.589 [2024-11-12 10:27:34.166946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.589 [2024-11-12 10:27:34.195776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.589 [2024-11-12 10:27:34.233031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58620 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58620 00:05:45.848 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58620 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58620 ']' 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58620 00:05:46.107 10:27:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58620 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.107 killing process with pid 58620 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58620' 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58620 00:05:46.107 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58620 00:05:46.367 00:05:46.367 real 0m0.984s 00:05:46.367 user 0m1.052s 00:05:46.367 sys 0m0.347s 00:05:46.367 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.367 ************************************ 00:05:46.367 END TEST default_locks_via_rpc 00:05:46.367 ************************************ 00:05:46.367 10:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.367 10:27:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:46.367 10:27:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.367 10:27:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.367 10:27:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.367 ************************************ 00:05:46.367 START TEST non_locking_app_on_locked_coremask 00:05:46.367 ************************************ 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58658 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58658 /var/tmp/spdk.sock 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58658 ']' 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
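The default_locks_via_rpc run above drives the same core lock through RPC instead of process lifetime: framework_disable_cpumask_locks releases the per-core lock files (the no_locks check then finds no /var/tmp/spdk_cpu_lock_* entries), and framework_enable_cpumask_locks takes them again so lslocks shows spdk_cpu_lock for the pid. A hand-driven version of that toggle, assuming a target is already listening on the default /var/tmp/spdk.sock; the RPC names and the rpc.py path come from the log, the pidof lookup is illustrative:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Drop the core locks held by the running target; the glob should now be empty.
  "$RPC" framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected: lock files still present"

  # Re-acquire them; lslocks must report spdk_cpu_lock for the target again.
  "$RPC" framework_enable_cpumask_locks
  lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock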
00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.367 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.367 [2024-11-12 10:27:35.059689] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:46.367 [2024-11-12 10:27:35.059802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ] 00:05:46.626 [2024-11-12 10:27:35.199399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.627 [2024-11-12 10:27:35.229089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.627 [2024-11-12 10:27:35.267530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58667 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58667 /var/tmp/spdk2.sock 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58667 ']' 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.886 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.886 [2024-11-12 10:27:35.448844] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:46.886 [2024-11-12 10:27:35.448939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58667 ] 00:05:46.886 [2024-11-12 10:27:35.603084] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.886 [2024-11-12 10:27:35.603159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.145 [2024-11-12 10:27:35.664959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.145 [2024-11-12 10:27:35.742892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.404 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.404 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:47.404 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58658 00:05:47.404 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58658 00:05:47.404 10:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58658 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58658 ']' 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58658 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58658 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.972 killing process with pid 58658 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58658' 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58658 00:05:47.972 10:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58658 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58667 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58667 ']' 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58667 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58667 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.540 killing process with pid 58667 00:05:48.540 10:27:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58667' 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58667 00:05:48.540 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58667 00:05:48.799 00:05:48.799 real 0m2.293s 00:05:48.799 user 0m2.602s 00:05:48.799 sys 0m0.718s 00:05:48.799 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.799 10:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.799 ************************************ 00:05:48.799 END TEST non_locking_app_on_locked_coremask 00:05:48.799 ************************************ 00:05:48.799 10:27:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:48.799 10:27:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.799 10:27:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.799 10:27:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.799 ************************************ 00:05:48.799 START TEST locking_app_on_unlocked_coremask 00:05:48.799 ************************************ 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58715 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58715 /var/tmp/spdk.sock 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58715 ']' 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.799 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.799 [2024-11-12 10:27:37.428230] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:48.799 [2024-11-12 10:27:37.428317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58715 ] 00:05:49.058 [2024-11-12 10:27:37.573749] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
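Both of the runs around this point rely on the same rule: two targets may share a core mask only if at least one of them was started with --disable-cpumask-locks, which is why a second RPC socket (/var/tmp/spdk2.sock) and the "CPU core locks deactivated" notice keep appearing in the trace. A stripped-down illustration of that coexistence, using the binary path and sockets from the log and a plain sleep in place of waitforlisten:

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x1 &                                   # holds the lock for core 0
  pid1=$!
  sleep 1

  # Same mask, but opted out of core locking and on a second RPC socket:
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 1

  kill -0 "$pid1" && kill -0 "$pid2" && echo "both targets are up on core 0"
  kill "$pid1" "$pid2"; wait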
00:05:49.058 [2024-11-12 10:27:37.573824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.058 [2024-11-12 10:27:37.606933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.058 [2024-11-12 10:27:37.649062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58729 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58729 /var/tmp/spdk2.sock 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58729 ']' 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.058 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.059 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.059 10:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.317 [2024-11-12 10:27:37.842037] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:49.317 [2024-11-12 10:27:37.842149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58729 ] 00:05:49.317 [2024-11-12 10:27:37.999713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.317 [2024-11-12 10:27:38.067908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.576 [2024-11-12 10:27:38.148904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.143 10:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.143 10:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:50.143 10:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58729 00:05:50.143 10:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58729 00:05:50.143 10:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58715 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58715 ']' 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58715 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58715 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.080 killing process with pid 58715 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58715' 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58715 00:05:51.080 10:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58715 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58729 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58729 ']' 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58729 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58729 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.648 killing process with pid 58729 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58729' 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58729 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58729 00:05:51.648 00:05:51.648 real 0m3.043s 00:05:51.648 user 0m3.600s 00:05:51.648 sys 0m0.865s 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.648 10:27:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.648 ************************************ 00:05:51.648 END TEST locking_app_on_unlocked_coremask 00:05:51.648 ************************************ 00:05:51.908 10:27:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.908 10:27:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.908 10:27:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.908 10:27:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.908 ************************************ 00:05:51.908 START TEST locking_app_on_locked_coremask 00:05:51.908 ************************************ 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58785 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58785 /var/tmp/spdk.sock 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58785 ']' 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.908 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.908 [2024-11-12 10:27:40.532214] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:51.908 [2024-11-12 10:27:40.532326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58785 ] 00:05:52.167 [2024-11-12 10:27:40.671848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.167 [2024-11-12 10:27:40.701940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.167 [2024-11-12 10:27:40.744325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58799 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58799 /var/tmp/spdk2.sock 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58799 /var/tmp/spdk2.sock 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58799 /var/tmp/spdk2.sock 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58799 ']' 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.167 10:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.427 [2024-11-12 10:27:40.934161] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:52.427 [2024-11-12 10:27:40.934296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58799 ] 00:05:52.427 [2024-11-12 10:27:41.094471] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58785 has claimed it. 00:05:52.427 [2024-11-12 10:27:41.094543] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.995 ERROR: process (pid: 58799) is no longer running 00:05:52.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58799) - No such process 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58785 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.995 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58785 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58785 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58785 ']' 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58785 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.254 10:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58785 00:05:53.254 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.254 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.254 killing process with pid 58785 00:05:53.254 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58785' 00:05:53.254 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58785 00:05:53.254 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58785 00:05:53.514 00:05:53.514 real 0m1.799s 00:05:53.514 user 0m2.117s 00:05:53.514 sys 0m0.451s 00:05:53.514 10:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.514 10:27:42 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:53.514 ************************************ 00:05:53.514 END TEST locking_app_on_locked_coremask 00:05:53.514 ************************************ 00:05:53.774 10:27:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:53.774 10:27:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.774 10:27:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.774 10:27:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.774 ************************************ 00:05:53.774 START TEST locking_overlapped_coremask 00:05:53.774 ************************************ 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58839 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58839 /var/tmp/spdk.sock 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58839 ']' 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.774 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.774 [2024-11-12 10:27:42.384520] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
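The locking_app_on_locked_coremask trace just above is the negative case: the second spdk_tgt (pid 58799) keeps the default locking behaviour, fails with "Cannot create lock on core 0, probably process 58785 has claimed it", and the NOT wrapper turns that non-zero exit into a passing assertion. A small sketch of the same expected-failure pattern, checking process liveness with kill -0 where the real test uses NOT waitforlisten:

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x1 & pid1=$!
  sleep 1

  # A second instance with the same mask and without --disable-cpumask-locks must NOT come up.
  "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
  sleep 1
  if kill -0 "$pid2" 2>/dev/null; then
      echo "FAIL: second instance is running despite the claimed core" >&2
  else
      echo "OK: second instance exited, core 0 is already claimed by $pid1"
  fi
  kill "$pid1"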
00:05:53.774 [2024-11-12 10:27:42.384639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:05:53.774 [2024-11-12 10:27:42.527377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.034 [2024-11-12 10:27:42.564306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.034 [2024-11-12 10:27:42.564448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.034 [2024-11-12 10:27:42.564452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.034 [2024-11-12 10:27:42.604713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58850 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58850 /var/tmp/spdk2.sock 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58850 /var/tmp/spdk2.sock 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58850 /var/tmp/spdk2.sock 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58850 ']' 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.034 10:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.034 [2024-11-12 10:27:42.790358] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:05:54.034 [2024-11-12 10:27:42.790441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:05:54.293 [2024-11-12 10:27:42.947909] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58839 has claimed it. 00:05:54.293 [2024-11-12 10:27:42.947979] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58850) - No such process 00:05:54.862 ERROR: process (pid: 58850) is no longer running 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58839 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58839 ']' 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58839 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58839 00:05:54.862 killing process with pid 58839 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58839' 00:05:54.862 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58839 00:05:54.862 10:27:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58839 00:05:55.122 00:05:55.122 real 0m1.504s 00:05:55.122 user 0m4.134s 00:05:55.122 sys 0m0.326s 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.122 ************************************ 00:05:55.122 END TEST locking_overlapped_coremask 00:05:55.122 ************************************ 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.122 10:27:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.122 10:27:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.122 10:27:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.122 10:27:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.122 ************************************ 00:05:55.122 START TEST locking_overlapped_coremask_via_rpc 00:05:55.122 ************************************ 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58894 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58894 /var/tmp/spdk.sock 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58894 ']' 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.122 10:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.382 [2024-11-12 10:27:43.927470] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:55.382 [2024-11-12 10:27:43.927573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:05:55.382 [2024-11-12 10:27:44.068405] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.382 [2024-11-12 10:27:44.068938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.382 [2024-11-12 10:27:44.102274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.382 [2024-11-12 10:27:44.102139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.382 [2024-11-12 10:27:44.102269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.640 [2024-11-12 10:27:44.141459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.640 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.640 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58900 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58900 /var/tmp/spdk2.sock 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58900 ']' 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.641 10:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.641 [2024-11-12 10:27:44.340069] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:55.641 [2024-11-12 10:27:44.340200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:05:55.900 [2024-11-12 10:27:44.501636] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.900 [2024-11-12 10:27:44.501681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.900 [2024-11-12 10:27:44.566491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.900 [2024-11-12 10:27:44.566551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.900 [2024-11-12 10:27:44.566552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.900 [2024-11-12 10:27:44.646725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.835 [2024-11-12 10:27:45.387378] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58894 has claimed it. 
00:05:56.835 request: 00:05:56.835 { 00:05:56.835 "method": "framework_enable_cpumask_locks", 00:05:56.835 "req_id": 1 00:05:56.835 } 00:05:56.835 Got JSON-RPC error response 00:05:56.835 response: 00:05:56.835 { 00:05:56.835 "code": -32603, 00:05:56.835 "message": "Failed to claim CPU core: 2" 00:05:56.835 } 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58894 /var/tmp/spdk.sock 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58894 ']' 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.835 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58900 /var/tmp/spdk2.sock 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58900 ']' 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.095 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.354 ************************************ 00:05:57.354 END TEST locking_overlapped_coremask_via_rpc 00:05:57.354 ************************************ 00:05:57.354 00:05:57.354 real 0m2.088s 00:05:57.354 user 0m1.278s 00:05:57.354 sys 0m0.156s 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.354 10:27:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.354 10:27:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.354 10:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58894 ]] 00:05:57.354 10:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58894 00:05:57.354 10:27:45 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58894 ']' 00:05:57.354 10:27:45 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58894 00:05:57.354 10:27:45 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:57.354 10:27:45 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.354 10:27:45 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58894 00:05:57.354 10:27:46 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.354 10:27:46 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.354 10:27:46 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58894' 00:05:57.354 killing process with pid 58894 00:05:57.354 10:27:46 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58894 00:05:57.354 10:27:46 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58894 00:05:57.613 10:27:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58900 ]] 00:05:57.613 10:27:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58900 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58900 ']' 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58900 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.613 
10:27:46 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58900 00:05:57.613 killing process with pid 58900 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58900' 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58900 00:05:57.613 10:27:46 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58900 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58894 ]] 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58894 00:05:57.872 Process with pid 58894 is not found 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58894 ']' 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58894 00:05:57.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58894) - No such process 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58894 is not found' 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58900 ]] 00:05:57.872 Process with pid 58900 is not found 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58900 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58900 ']' 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58900 00:05:57.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58900) - No such process 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58900 is not found' 00:05:57.872 10:27:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.872 ************************************ 00:05:57.872 END TEST cpu_locks 00:05:57.872 ************************************ 00:05:57.872 00:05:57.872 real 0m14.537s 00:05:57.872 user 0m27.346s 00:05:57.872 sys 0m3.969s 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.872 10:27:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 00:05:57.872 real 0m40.396s 00:05:57.872 user 1m20.831s 00:05:57.872 sys 0m7.235s 00:05:57.872 10:27:46 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.872 10:27:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 ************************************ 00:05:57.872 END TEST event 00:05:57.872 ************************************ 00:05:58.131 10:27:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.131 10:27:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.131 10:27:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.131 10:27:46 -- common/autotest_common.sh@10 -- # set +x 00:05:58.131 ************************************ 00:05:58.131 START TEST thread 00:05:58.131 ************************************ 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.131 * Looking for test storage... 
00:05:58.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.131 10:27:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.131 10:27:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.131 10:27:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.131 10:27:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.131 10:27:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.131 10:27:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.131 10:27:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.131 10:27:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.131 10:27:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.131 10:27:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.131 10:27:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.131 10:27:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:58.131 10:27:46 thread -- scripts/common.sh@345 -- # : 1 00:05:58.131 10:27:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.131 10:27:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.131 10:27:46 thread -- scripts/common.sh@365 -- # decimal 1 00:05:58.131 10:27:46 thread -- scripts/common.sh@353 -- # local d=1 00:05:58.131 10:27:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.131 10:27:46 thread -- scripts/common.sh@355 -- # echo 1 00:05:58.131 10:27:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.131 10:27:46 thread -- scripts/common.sh@366 -- # decimal 2 00:05:58.131 10:27:46 thread -- scripts/common.sh@353 -- # local d=2 00:05:58.131 10:27:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.131 10:27:46 thread -- scripts/common.sh@355 -- # echo 2 00:05:58.131 10:27:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.131 10:27:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.131 10:27:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.131 10:27:46 thread -- scripts/common.sh@368 -- # return 0 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.131 --rc genhtml_branch_coverage=1 00:05:58.131 --rc genhtml_function_coverage=1 00:05:58.131 --rc genhtml_legend=1 00:05:58.131 --rc geninfo_all_blocks=1 00:05:58.131 --rc geninfo_unexecuted_blocks=1 00:05:58.131 00:05:58.131 ' 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.131 --rc genhtml_branch_coverage=1 00:05:58.131 --rc genhtml_function_coverage=1 00:05:58.131 --rc genhtml_legend=1 00:05:58.131 --rc geninfo_all_blocks=1 00:05:58.131 --rc geninfo_unexecuted_blocks=1 00:05:58.131 00:05:58.131 ' 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.131 --rc genhtml_branch_coverage=1 00:05:58.131 --rc genhtml_function_coverage=1 00:05:58.131 --rc genhtml_legend=1 00:05:58.131 --rc geninfo_all_blocks=1 00:05:58.131 --rc geninfo_unexecuted_blocks=1 00:05:58.131 00:05:58.131 ' 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.131 --rc genhtml_branch_coverage=1 00:05:58.131 --rc genhtml_function_coverage=1 00:05:58.131 --rc genhtml_legend=1 00:05:58.131 --rc geninfo_all_blocks=1 00:05:58.131 --rc geninfo_unexecuted_blocks=1 00:05:58.131 00:05:58.131 ' 00:05:58.131 10:27:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.131 10:27:46 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:58.132 10:27:46 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.132 10:27:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.132 ************************************ 00:05:58.132 START TEST thread_poller_perf 00:05:58.132 ************************************ 00:05:58.132 10:27:46 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.132 [2024-11-12 10:27:46.858135] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:58.132 [2024-11-12 10:27:46.858267] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:05:58.390 [2024-11-12 10:27:47.022502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.390 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:58.390 [2024-11-12 10:27:47.054400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.764 [2024-11-12T10:27:48.522Z] ====================================== 00:05:59.764 [2024-11-12T10:27:48.522Z] busy:2211106366 (cyc) 00:05:59.764 [2024-11-12T10:27:48.522Z] total_run_count: 341000 00:05:59.764 [2024-11-12T10:27:48.522Z] tsc_hz: 2200000000 (cyc) 00:05:59.764 [2024-11-12T10:27:48.522Z] ====================================== 00:05:59.764 [2024-11-12T10:27:48.522Z] poller_cost: 6484 (cyc), 2947 (nsec) 00:05:59.764 ************************************ 00:05:59.764 END TEST thread_poller_perf 00:05:59.764 ************************************ 00:05:59.764 00:05:59.764 real 0m1.262s 00:05:59.764 user 0m1.100s 00:05:59.764 sys 0m0.043s 00:05:59.764 10:27:48 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.764 10:27:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 10:27:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.764 10:27:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:59.764 10:27:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.764 10:27:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 ************************************ 00:05:59.764 START TEST thread_poller_perf 00:05:59.764 ************************************ 00:05:59.764 10:27:48 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.764 [2024-11-12 10:27:48.169436] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:05:59.764 [2024-11-12 10:27:48.169528] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:05:59.764 [2024-11-12 10:27:48.313558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.765 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:59.765 [2024-11-12 10:27:48.345319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.735 [2024-11-12T10:27:49.493Z] ====================================== 00:06:00.735 [2024-11-12T10:27:49.493Z] busy:2201797594 (cyc) 00:06:00.735 [2024-11-12T10:27:49.493Z] total_run_count: 4687000 00:06:00.735 [2024-11-12T10:27:49.493Z] tsc_hz: 2200000000 (cyc) 00:06:00.735 [2024-11-12T10:27:49.493Z] ====================================== 00:06:00.735 [2024-11-12T10:27:49.493Z] poller_cost: 469 (cyc), 213 (nsec) 00:06:00.735 00:06:00.735 real 0m1.228s 00:06:00.735 user 0m1.091s 00:06:00.735 sys 0m0.030s 00:06:00.735 10:27:49 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.735 ************************************ 00:06:00.735 END TEST thread_poller_perf 00:06:00.735 ************************************ 00:06:00.735 10:27:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.735 10:27:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:00.735 00:06:00.735 real 0m2.772s 00:06:00.735 user 0m2.329s 00:06:00.735 sys 0m0.215s 00:06:00.735 10:27:49 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.735 ************************************ 00:06:00.735 END TEST thread 00:06:00.735 ************************************ 00:06:00.735 10:27:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.735 10:27:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:00.735 10:27:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.735 10:27:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.735 10:27:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.735 10:27:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.735 ************************************ 00:06:00.735 START TEST app_cmdline 00:06:00.735 ************************************ 00:06:00.735 10:27:49 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.994 * Looking for test storage... 
00:06:00.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.994 10:27:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.994 --rc genhtml_branch_coverage=1 00:06:00.994 --rc genhtml_function_coverage=1 00:06:00.994 --rc genhtml_legend=1 00:06:00.994 --rc geninfo_all_blocks=1 00:06:00.994 --rc geninfo_unexecuted_blocks=1 00:06:00.994 00:06:00.994 ' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.994 --rc genhtml_branch_coverage=1 00:06:00.994 --rc genhtml_function_coverage=1 00:06:00.994 --rc genhtml_legend=1 00:06:00.994 --rc geninfo_all_blocks=1 00:06:00.994 --rc geninfo_unexecuted_blocks=1 00:06:00.994 
00:06:00.994 ' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.994 --rc genhtml_branch_coverage=1 00:06:00.994 --rc genhtml_function_coverage=1 00:06:00.994 --rc genhtml_legend=1 00:06:00.994 --rc geninfo_all_blocks=1 00:06:00.994 --rc geninfo_unexecuted_blocks=1 00:06:00.994 00:06:00.994 ' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.994 --rc genhtml_branch_coverage=1 00:06:00.994 --rc genhtml_function_coverage=1 00:06:00.994 --rc genhtml_legend=1 00:06:00.994 --rc geninfo_all_blocks=1 00:06:00.994 --rc geninfo_unexecuted_blocks=1 00:06:00.994 00:06:00.994 ' 00:06:00.994 10:27:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:00.994 10:27:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59143 00:06:00.994 10:27:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:00.994 10:27:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59143 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59143 ']' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.994 10:27:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.995 [2024-11-12 10:27:49.722952] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:00.995 [2024-11-12 10:27:49.723048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:06:01.253 [2024-11-12 10:27:49.866225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.253 [2024-11-12 10:27:49.893821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.253 [2024-11-12 10:27:49.930068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.511 10:27:50 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.511 10:27:50 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:01.511 10:27:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:01.770 { 00:06:01.770 "version": "SPDK v25.01-pre git sha1 eba7e4aea", 00:06:01.770 "fields": { 00:06:01.770 "major": 25, 00:06:01.770 "minor": 1, 00:06:01.770 "patch": 0, 00:06:01.770 "suffix": "-pre", 00:06:01.770 "commit": "eba7e4aea" 00:06:01.770 } 00:06:01.770 } 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:01.770 10:27:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:01.770 10:27:50 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.027 request: 00:06:02.027 { 00:06:02.027 "method": "env_dpdk_get_mem_stats", 00:06:02.027 "req_id": 1 00:06:02.027 } 00:06:02.027 Got JSON-RPC error response 00:06:02.027 response: 00:06:02.027 { 00:06:02.027 "code": -32601, 00:06:02.027 "message": "Method not found" 00:06:02.027 } 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.027 10:27:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59143 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59143 ']' 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59143 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59143 00:06:02.027 killing process with pid 59143 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59143' 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@971 -- # kill 59143 00:06:02.027 10:27:50 app_cmdline -- common/autotest_common.sh@976 -- # wait 59143 00:06:02.286 ************************************ 00:06:02.286 END TEST app_cmdline 00:06:02.286 ************************************ 00:06:02.286 00:06:02.286 real 0m1.515s 00:06:02.286 user 0m2.039s 00:06:02.286 sys 0m0.377s 00:06:02.286 10:27:50 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.286 10:27:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 10:27:51 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:02.286 10:27:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.286 10:27:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.286 10:27:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.286 ************************************ 00:06:02.286 START TEST version 00:06:02.286 ************************************ 00:06:02.286 10:27:51 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:02.545 * Looking for test storage... 
00:06:02.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:02.545 10:27:51 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.545 10:27:51 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.545 10:27:51 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.545 10:27:51 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.545 10:27:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.545 10:27:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.545 10:27:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.545 10:27:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.545 10:27:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.545 10:27:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.545 10:27:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.545 10:27:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.545 10:27:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.545 10:27:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.545 10:27:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.545 10:27:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:02.545 10:27:51 version -- scripts/common.sh@345 -- # : 1 00:06:02.545 10:27:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.545 10:27:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.545 10:27:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:02.545 10:27:51 version -- scripts/common.sh@353 -- # local d=1 00:06:02.545 10:27:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.545 10:27:51 version -- scripts/common.sh@355 -- # echo 1 00:06:02.545 10:27:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.545 10:27:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:02.546 10:27:51 version -- scripts/common.sh@353 -- # local d=2 00:06:02.546 10:27:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.546 10:27:51 version -- scripts/common.sh@355 -- # echo 2 00:06:02.546 10:27:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.546 10:27:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.546 10:27:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.546 10:27:51 version -- scripts/common.sh@368 -- # return 0 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.546 --rc genhtml_branch_coverage=1 00:06:02.546 --rc genhtml_function_coverage=1 00:06:02.546 --rc genhtml_legend=1 00:06:02.546 --rc geninfo_all_blocks=1 00:06:02.546 --rc geninfo_unexecuted_blocks=1 00:06:02.546 00:06:02.546 ' 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.546 --rc genhtml_branch_coverage=1 00:06:02.546 --rc genhtml_function_coverage=1 00:06:02.546 --rc genhtml_legend=1 00:06:02.546 --rc geninfo_all_blocks=1 00:06:02.546 --rc geninfo_unexecuted_blocks=1 00:06:02.546 00:06:02.546 ' 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.546 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:02.546 --rc genhtml_branch_coverage=1 00:06:02.546 --rc genhtml_function_coverage=1 00:06:02.546 --rc genhtml_legend=1 00:06:02.546 --rc geninfo_all_blocks=1 00:06:02.546 --rc geninfo_unexecuted_blocks=1 00:06:02.546 00:06:02.546 ' 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.546 --rc genhtml_branch_coverage=1 00:06:02.546 --rc genhtml_function_coverage=1 00:06:02.546 --rc genhtml_legend=1 00:06:02.546 --rc geninfo_all_blocks=1 00:06:02.546 --rc geninfo_unexecuted_blocks=1 00:06:02.546 00:06:02.546 ' 00:06:02.546 10:27:51 version -- app/version.sh@17 -- # get_header_version major 00:06:02.546 10:27:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # cut -f2 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.546 10:27:51 version -- app/version.sh@17 -- # major=25 00:06:02.546 10:27:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:02.546 10:27:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # cut -f2 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.546 10:27:51 version -- app/version.sh@18 -- # minor=1 00:06:02.546 10:27:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:02.546 10:27:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # cut -f2 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.546 10:27:51 version -- app/version.sh@19 -- # patch=0 00:06:02.546 10:27:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:02.546 10:27:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # cut -f2 00:06:02.546 10:27:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.546 10:27:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:02.546 10:27:51 version -- app/version.sh@22 -- # version=25.1 00:06:02.546 10:27:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:02.546 10:27:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:02.546 10:27:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:02.546 10:27:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:02.546 10:27:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:02.546 10:27:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:02.546 00:06:02.546 real 0m0.254s 00:06:02.546 user 0m0.166s 00:06:02.546 sys 0m0.122s 00:06:02.546 ************************************ 00:06:02.546 END TEST version 00:06:02.546 ************************************ 00:06:02.546 10:27:51 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.546 10:27:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 10:27:51 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:02.805 10:27:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:02.805 10:27:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:02.805 10:27:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:02.805 10:27:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:02.805 10:27:51 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:02.805 10:27:51 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:02.805 10:27:51 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:02.805 10:27:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.805 10:27:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.805 10:27:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 ************************************ 00:06:02.805 START TEST spdk_dd 00:06:02.805 ************************************ 00:06:02.805 10:27:51 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:02.805 * Looking for test storage... 00:06:02.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:02.805 10:27:51 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.805 10:27:51 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.805 10:27:51 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.805 10:27:51 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:02.805 10:27:51 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:02.806 10:27:51 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.806 10:27:51 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.806 --rc genhtml_branch_coverage=1 00:06:02.806 --rc genhtml_function_coverage=1 00:06:02.806 --rc genhtml_legend=1 00:06:02.806 --rc geninfo_all_blocks=1 00:06:02.806 --rc geninfo_unexecuted_blocks=1 00:06:02.806 00:06:02.806 ' 00:06:02.806 10:27:51 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.806 --rc genhtml_branch_coverage=1 00:06:02.806 --rc genhtml_function_coverage=1 00:06:02.806 --rc genhtml_legend=1 00:06:02.806 --rc geninfo_all_blocks=1 00:06:02.806 --rc geninfo_unexecuted_blocks=1 00:06:02.806 00:06:02.806 ' 00:06:02.806 10:27:51 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.806 --rc genhtml_branch_coverage=1 00:06:02.806 --rc genhtml_function_coverage=1 00:06:02.806 --rc genhtml_legend=1 00:06:02.806 --rc geninfo_all_blocks=1 00:06:02.806 --rc geninfo_unexecuted_blocks=1 00:06:02.806 00:06:02.806 ' 00:06:02.806 10:27:51 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.806 --rc genhtml_branch_coverage=1 00:06:02.806 --rc genhtml_function_coverage=1 00:06:02.806 --rc genhtml_legend=1 00:06:02.806 --rc geninfo_all_blocks=1 00:06:02.806 --rc geninfo_unexecuted_blocks=1 00:06:02.806 00:06:02.806 ' 00:06:02.806 10:27:51 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.806 10:27:51 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.806 10:27:51 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.806 10:27:51 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.806 10:27:51 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.806 10:27:51 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:02.806 10:27:51 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.806 10:27:51 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.375 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.375 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.375 10:27:51 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:03.375 10:27:51 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:03.375 10:27:51 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:03.375 10:27:51 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:03.375 10:27:51 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
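The nvme_in_userspace helper traced just above selects NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). Condensed into a standalone pipeline, and assuming lspci from pciutils is available (the script checks this with `hash lspci` above), the candidate device list can be reproduced as a sketch:

  # List NVMe controller BDFs by matching PCI class code 0108 / prog-if 02,
  # mirroring the iter_pci_class_code commands traced above.
  lspci -mm -n -D | grep -i -- -p02 | \
    awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # In this run the candidates are 0000:00:10.0 and 0000:00:11.0.

pci_can_use then filters each candidate against an allow/block list (both empty in this run) before the two BDFs are printed.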
00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
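The long run of `read -r _ lib _` / `[[ ... == liburing.so.* ]]` pairs through this stretch is check_liburing from test/dd walking every NEEDED entry of the spdk_dd binary. A condensed, behavior-equivalent sketch of that loop (not the literal script, which also traces each library name individually):

  # Does the spdk_dd binary link against liburing?
  liburing_in_use=0
  while read -r _ lib _; do
    # objdump -p prints lines like "NEEDED  liburing.so.2"
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
  (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'

In this run the final NEEDED entry is liburing.so.2, so the check succeeds and the dd tests proceed with liburing_in_use=1.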
00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:03.375 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:03.376 10:27:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:03.376 * spdk_dd linked to liburing 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:03.376 10:27:52 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:03.376 10:27:52 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:03.377 10:27:52 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:03.377 10:27:52 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:03.377 10:27:52 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:03.377 10:27:52 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:03.377 10:27:52 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:03.377 10:27:52 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:03.377 10:27:52 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:03.377 10:27:52 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:03.377 10:27:52 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:03.377 10:27:52 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.377 10:27:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:03.377 ************************************ 00:06:03.377 START TEST spdk_dd_basic_rw 00:06:03.377 ************************************ 00:06:03.377 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:03.377 * Looking for test storage... 00:06:03.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:03.377 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.637 --rc genhtml_branch_coverage=1 00:06:03.637 --rc genhtml_function_coverage=1 00:06:03.637 --rc genhtml_legend=1 00:06:03.637 --rc geninfo_all_blocks=1 00:06:03.637 --rc geninfo_unexecuted_blocks=1 00:06:03.637 00:06:03.637 ' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.637 --rc genhtml_branch_coverage=1 00:06:03.637 --rc genhtml_function_coverage=1 00:06:03.637 --rc genhtml_legend=1 00:06:03.637 --rc geninfo_all_blocks=1 00:06:03.637 --rc geninfo_unexecuted_blocks=1 00:06:03.637 00:06:03.637 ' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.637 --rc genhtml_branch_coverage=1 00:06:03.637 --rc genhtml_function_coverage=1 00:06:03.637 --rc genhtml_legend=1 00:06:03.637 --rc geninfo_all_blocks=1 00:06:03.637 --rc geninfo_unexecuted_blocks=1 00:06:03.637 00:06:03.637 ' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.637 --rc genhtml_branch_coverage=1 00:06:03.637 --rc genhtml_function_coverage=1 00:06:03.637 --rc genhtml_legend=1 00:06:03.637 --rc geninfo_all_blocks=1 00:06:03.637 --rc geninfo_unexecuted_blocks=1 00:06:03.637 00:06:03.637 ' 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
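At this point basic_rw.sh has defined the Nvme0 attach parameters (the method_bdev_nvme_attach_controller_0 array) and the two dump files. For readability, the JSON configuration that gen_conf renders from that array and feeds to spdk_dd via --json /dev/fd/61 (it appears in compact form further down in this test) is, pretty-printed:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "trtype": "pcie",
              "traddr": "0000:00:10.0",
              "name": "Nvme0"
            },
            "method": "bdev_nvme_attach_controller"
          },
          {
            "method": "bdev_wait_for_examine"
          }
        ]
      }
    ]
  }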
00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:03.637 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:03.899 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:03.899 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.900 ************************************ 00:06:03.900 START TEST dd_bs_lt_native_bs 00:06:03.900 ************************************ 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:03.900 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:03.900 { 00:06:03.900 "subsystems": [ 00:06:03.900 { 00:06:03.900 "subsystem": "bdev", 00:06:03.900 "config": [ 00:06:03.900 { 00:06:03.900 "params": { 00:06:03.900 "trtype": "pcie", 00:06:03.900 "traddr": "0000:00:10.0", 00:06:03.900 "name": "Nvme0" 00:06:03.900 }, 00:06:03.900 "method": "bdev_nvme_attach_controller" 00:06:03.900 }, 00:06:03.900 { 00:06:03.900 "method": "bdev_wait_for_examine" 00:06:03.900 } 00:06:03.900 ] 00:06:03.900 } 00:06:03.900 ] 00:06:03.900 } 00:06:03.900 [2024-11-12 10:27:52.524944] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:03.900 [2024-11-12 10:27:52.525043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59487 ] 00:06:04.160 [2024-11-12 10:27:52.676804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.160 [2024-11-12 10:27:52.719108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.160 [2024-11-12 10:27:52.757134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.160 [2024-11-12 10:27:52.854456] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:04.160 [2024-11-12 10:27:52.854529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.418 [2024-11-12 10:27:52.939158] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.418 00:06:04.418 real 0m0.531s 00:06:04.418 user 0m0.371s 00:06:04.418 sys 0m0.118s 00:06:04.418 10:27:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.418 10:27:52 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:04.418 ************************************ 00:06:04.418 END TEST dd_bs_lt_native_bs 00:06:04.418 ************************************ 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.418 ************************************ 00:06:04.418 START TEST dd_rw 00:06:04.418 ************************************ 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:04.418 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.984 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:04.984 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:04.984 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.984 10:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.243 [2024-11-12 10:27:53.760330] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
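The dd_bs_lt_native_bs test that ends here works by pulling the native block size out of the identify dump above (the harness matches "LBA Format #04: Data Size: *([0-9]+)" and gets 4096), then running spdk_dd with --bs=2048 and expecting the "--bs value cannot be less than input (1) neither output (4096) native block size" error; the NOT wrapper inverts the exit status, so that failure is the passing outcome. A minimal standalone sketch of the same check, reusing paths and the device name from this log (identify_output and conf are placeholders for the identify text and the bdev JSON shown here, not real harness variables):

    # Sketch only, not the harness's exact code: re-derive the native block size and
    # confirm spdk_dd rejects a smaller --bs, as dd_bs_lt_native_bs does above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    pat='LBA Format #04: Data Size: *([0-9]+)'                       # same regex as in the trace
    [[ $identify_output =~ $pat ]] && native_bs=${BASH_REMATCH[1]}   # -> 4096 in this run
    if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=$((native_bs / 2)) --json <(echo "$conf"); then
        echo "FAIL: spdk_dd accepted --bs smaller than the native block size" >&2
    else
        echo "PASS: spdk_dd failed as expected"                      # the log shows es=234 folded to 1 by NOT
    fi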
00:06:05.243 [2024-11-12 10:27:53.760431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:06:05.243 { 00:06:05.243 "subsystems": [ 00:06:05.243 { 00:06:05.243 "subsystem": "bdev", 00:06:05.243 "config": [ 00:06:05.243 { 00:06:05.243 "params": { 00:06:05.243 "trtype": "pcie", 00:06:05.243 "traddr": "0000:00:10.0", 00:06:05.243 "name": "Nvme0" 00:06:05.243 }, 00:06:05.243 "method": "bdev_nvme_attach_controller" 00:06:05.243 }, 00:06:05.243 { 00:06:05.243 "method": "bdev_wait_for_examine" 00:06:05.243 } 00:06:05.243 ] 00:06:05.243 } 00:06:05.243 ] 00:06:05.243 } 00:06:05.243 [2024-11-12 10:27:53.911860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.243 [2024-11-12 10:27:53.953115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.243 [2024-11-12 10:27:53.990517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.502  [2024-11-12T10:27:54.260Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:05.502 00:06:05.502 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:05.502 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:05.502 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.502 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.762 { 00:06:05.762 "subsystems": [ 00:06:05.762 { 00:06:05.762 "subsystem": "bdev", 00:06:05.762 "config": [ 00:06:05.762 { 00:06:05.762 "params": { 00:06:05.762 "trtype": "pcie", 00:06:05.762 "traddr": "0000:00:10.0", 00:06:05.762 "name": "Nvme0" 00:06:05.762 }, 00:06:05.762 "method": "bdev_nvme_attach_controller" 00:06:05.762 }, 00:06:05.762 { 00:06:05.762 "method": "bdev_wait_for_examine" 00:06:05.762 } 00:06:05.762 ] 00:06:05.762 } 00:06:05.762 ] 00:06:05.762 } 00:06:05.762 [2024-11-12 10:27:54.281729] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:05.762 [2024-11-12 10:27:54.281855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:06:05.762 [2024-11-12 10:27:54.439749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.762 [2024-11-12 10:27:54.469779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.762 [2024-11-12 10:27:54.499280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.021  [2024-11-12T10:27:54.779Z] Copying: 60/60 [kB] (average 14 MBps) 00:06:06.021 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.021 10:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.021 { 00:06:06.021 "subsystems": [ 00:06:06.021 { 00:06:06.021 "subsystem": "bdev", 00:06:06.021 "config": [ 00:06:06.021 { 00:06:06.021 "params": { 00:06:06.021 "trtype": "pcie", 00:06:06.021 "traddr": "0000:00:10.0", 00:06:06.021 "name": "Nvme0" 00:06:06.021 }, 00:06:06.021 "method": "bdev_nvme_attach_controller" 00:06:06.021 }, 00:06:06.021 { 00:06:06.021 "method": "bdev_wait_for_examine" 00:06:06.021 } 00:06:06.021 ] 00:06:06.021 } 00:06:06.021 ] 00:06:06.021 } 00:06:06.021 [2024-11-12 10:27:54.775901] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
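Each dd_rw iteration in the remainder of this log is the same four-step cycle: write dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1, diff the two files, then zero the device with clear_nvme before the next pass. Condensed into a sketch with the exact commands from the trace (conf stands for the bdev JSON; the harness feeds it over /dev/fd/62, approximated here with process substitution):

    # One dd_rw pass as traced above (bs=4096, qd=1, count=15 -> 61440 bytes).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(echo "$conf")             # write test data
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json <(echo "$conf")  # read it back
    diff -q "$DUMP0" "$DUMP1"                                                                 # must be byte-identical
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(echo "$conf")      # clear_nvme: zero the region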
00:06:06.021 [2024-11-12 10:27:54.776140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59547 ] 00:06:06.279 [2024-11-12 10:27:54.921692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.279 [2024-11-12 10:27:54.951506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.279 [2024-11-12 10:27:54.981097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.538  [2024-11-12T10:27:55.296Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:06.538 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:06.538 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.105 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:07.105 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:07.105 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.105 10:27:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.105 { 00:06:07.105 "subsystems": [ 00:06:07.105 { 00:06:07.105 "subsystem": "bdev", 00:06:07.105 "config": [ 00:06:07.105 { 00:06:07.105 "params": { 00:06:07.105 "trtype": "pcie", 00:06:07.105 "traddr": "0000:00:10.0", 00:06:07.105 "name": "Nvme0" 00:06:07.105 }, 00:06:07.105 "method": "bdev_nvme_attach_controller" 00:06:07.105 }, 00:06:07.105 { 00:06:07.105 "method": "bdev_wait_for_examine" 00:06:07.105 } 00:06:07.105 ] 00:06:07.105 } 00:06:07.105 ] 00:06:07.105 } 00:06:07.105 [2024-11-12 10:27:55.805872] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:07.105 [2024-11-12 10:27:55.805967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59566 ] 00:06:07.364 [2024-11-12 10:27:55.954313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.364 [2024-11-12 10:27:55.985294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.364 [2024-11-12 10:27:56.013115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.364  [2024-11-12T10:27:56.381Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:07.623 00:06:07.623 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:07.623 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.623 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.623 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.623 { 00:06:07.623 "subsystems": [ 00:06:07.623 { 00:06:07.623 "subsystem": "bdev", 00:06:07.623 "config": [ 00:06:07.623 { 00:06:07.623 "params": { 00:06:07.623 "trtype": "pcie", 00:06:07.623 "traddr": "0000:00:10.0", 00:06:07.623 "name": "Nvme0" 00:06:07.623 }, 00:06:07.623 "method": "bdev_nvme_attach_controller" 00:06:07.623 }, 00:06:07.623 { 00:06:07.623 "method": "bdev_wait_for_examine" 00:06:07.623 } 00:06:07.623 ] 00:06:07.623 } 00:06:07.623 ] 00:06:07.623 } 00:06:07.623 [2024-11-12 10:27:56.277840] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:07.623 [2024-11-12 10:27:56.277931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:06:07.882 [2024-11-12 10:27:56.421739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.882 [2024-11-12 10:27:56.450915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.882 [2024-11-12 10:27:56.479332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.882  [2024-11-12T10:27:56.899Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:08.141 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.141 10:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.141 { 00:06:08.141 "subsystems": [ 00:06:08.141 { 00:06:08.141 "subsystem": "bdev", 00:06:08.141 "config": [ 00:06:08.141 { 00:06:08.141 "params": { 00:06:08.141 "trtype": "pcie", 00:06:08.141 "traddr": "0000:00:10.0", 00:06:08.141 "name": "Nvme0" 00:06:08.141 }, 00:06:08.141 "method": "bdev_nvme_attach_controller" 00:06:08.141 }, 00:06:08.141 { 00:06:08.141 "method": "bdev_wait_for_examine" 00:06:08.141 } 00:06:08.141 ] 00:06:08.141 } 00:06:08.141 ] 00:06:08.141 } 00:06:08.141 [2024-11-12 10:27:56.760045] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
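As a rough worked comparison of the figures so far: the bs=4096, qd=1 pass reported 60/60 kB at an average of 29 MBps for the write and 14 MBps for the read-back, while the qd=64 pass just above reports 58 MBps in both directions, roughly a 2x to 4x gain from the deeper queue on this QEMU-emulated controller. With only 60 kB moved per pass, these averages are indicative rather than meaningful benchmarks.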
00:06:08.141 [2024-11-12 10:27:56.760327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:06:08.400 [2024-11-12 10:27:56.906453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.400 [2024-11-12 10:27:56.934760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.400 [2024-11-12 10:27:56.962692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.400  [2024-11-12T10:27:57.416Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:08.658 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:08.658 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.224 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:09.224 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:09.224 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.224 10:27:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.224 [2024-11-12 10:27:57.756783] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
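The matrix being walked here comes from the setup traced at the start of dd_rw: bss is built by left-shifting the native block size (4096, 8192, 16384), qds=(1 64), and --count is scaled per block size (15, then 7, then 3 in the passes below), giving 15*4096 = 61440, 7*8192 = 57344 and 3*16384 = 49152 bytes per pass. Restated as a minimal bash loop that mirrors the harness's arrays (not its exact code):

    native_bs=4096
    qds=(1 64)
    bss=()
    for s in 0 1 2; do bss+=($((native_bs << s))); done   # 4096 8192 16384, as at basic_rw.sh@17-18 in the trace
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        echo "pass: bs=$bs qd=$qd"                        # the harness runs the write/read/diff cycle here with a size-matched --count
      done
    done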
00:06:09.224 [2024-11-12 10:27:57.757080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:06:09.224 { 00:06:09.224 "subsystems": [ 00:06:09.224 { 00:06:09.224 "subsystem": "bdev", 00:06:09.224 "config": [ 00:06:09.224 { 00:06:09.224 "params": { 00:06:09.224 "trtype": "pcie", 00:06:09.224 "traddr": "0000:00:10.0", 00:06:09.224 "name": "Nvme0" 00:06:09.224 }, 00:06:09.224 "method": "bdev_nvme_attach_controller" 00:06:09.224 }, 00:06:09.225 { 00:06:09.225 "method": "bdev_wait_for_examine" 00:06:09.225 } 00:06:09.225 ] 00:06:09.225 } 00:06:09.225 ] 00:06:09.225 } 00:06:09.225 [2024-11-12 10:27:57.903819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.225 [2024-11-12 10:27:57.937157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.225 [2024-11-12 10:27:57.967308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.483  [2024-11-12T10:27:58.241Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:09.483 00:06:09.483 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:09.483 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:09.483 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.483 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.483 [2024-11-12 10:27:58.232815] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
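Every spdk_dd call in this test receives the same bdev configuration on a file descriptor (--json /dev/fd/62 or /dev/fd/61), which is the small JSON document repeated throughout the log: attach the PCIe controller at 0000:00:10.0 under the name Nvme0 (its first namespace then appears as the Nvme0n1 bdev used by --ob/--ib), then wait for bdev examination. Outside the harness the same thing can be done with process substitution; gen_conf's exact fd plumbing isn't visible here, so this is a sketch:

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # example: write one zero block to the bdev defined above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=4096 --count=1 --json <(echo "$conf")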
00:06:09.483 [2024-11-12 10:27:58.233096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:06:09.483 { 00:06:09.483 "subsystems": [ 00:06:09.483 { 00:06:09.483 "subsystem": "bdev", 00:06:09.483 "config": [ 00:06:09.483 { 00:06:09.483 "params": { 00:06:09.483 "trtype": "pcie", 00:06:09.483 "traddr": "0000:00:10.0", 00:06:09.483 "name": "Nvme0" 00:06:09.483 }, 00:06:09.483 "method": "bdev_nvme_attach_controller" 00:06:09.483 }, 00:06:09.483 { 00:06:09.483 "method": "bdev_wait_for_examine" 00:06:09.483 } 00:06:09.483 ] 00:06:09.483 } 00:06:09.483 ] 00:06:09.483 } 00:06:09.742 [2024-11-12 10:27:58.375394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.742 [2024-11-12 10:27:58.402627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.742 [2024-11-12 10:27:58.432956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.001  [2024-11-12T10:27:58.759Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:10.001 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.001 10:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.001 { 00:06:10.001 "subsystems": [ 00:06:10.001 { 00:06:10.001 "subsystem": "bdev", 00:06:10.001 "config": [ 00:06:10.001 { 00:06:10.001 "params": { 00:06:10.001 "trtype": "pcie", 00:06:10.001 "traddr": "0000:00:10.0", 00:06:10.001 "name": "Nvme0" 00:06:10.001 }, 00:06:10.001 "method": "bdev_nvme_attach_controller" 00:06:10.001 }, 00:06:10.001 { 00:06:10.001 "method": "bdev_wait_for_examine" 00:06:10.001 } 00:06:10.001 ] 00:06:10.001 } 00:06:10.001 ] 00:06:10.001 } 00:06:10.001 [2024-11-12 10:27:58.707465] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:10.001 [2024-11-12 10:27:58.707713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:06:10.260 [2024-11-12 10:27:58.852136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.260 [2024-11-12 10:27:58.879325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.260 [2024-11-12 10:27:58.910229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.260  [2024-11-12T10:27:59.277Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:10.519 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:10.519 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.087 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:11.087 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.087 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.087 10:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.087 [2024-11-12 10:27:59.725263] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:11.087 [2024-11-12 10:27:59.725532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:06:11.087 { 00:06:11.087 "subsystems": [ 00:06:11.087 { 00:06:11.087 "subsystem": "bdev", 00:06:11.087 "config": [ 00:06:11.087 { 00:06:11.087 "params": { 00:06:11.087 "trtype": "pcie", 00:06:11.087 "traddr": "0000:00:10.0", 00:06:11.087 "name": "Nvme0" 00:06:11.087 }, 00:06:11.087 "method": "bdev_nvme_attach_controller" 00:06:11.087 }, 00:06:11.087 { 00:06:11.087 "method": "bdev_wait_for_examine" 00:06:11.087 } 00:06:11.087 ] 00:06:11.087 } 00:06:11.087 ] 00:06:11.087 } 00:06:11.345 [2024-11-12 10:27:59.869350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.345 [2024-11-12 10:27:59.896605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.345 [2024-11-12 10:27:59.926386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.345  [2024-11-12T10:28:00.361Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:11.603 00:06:11.603 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.603 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:11.603 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.603 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 [2024-11-12 10:28:00.199666] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:11.603 [2024-11-12 10:28:00.199753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59676 ] 00:06:11.603 { 00:06:11.603 "subsystems": [ 00:06:11.603 { 00:06:11.603 "subsystem": "bdev", 00:06:11.603 "config": [ 00:06:11.603 { 00:06:11.603 "params": { 00:06:11.603 "trtype": "pcie", 00:06:11.603 "traddr": "0000:00:10.0", 00:06:11.603 "name": "Nvme0" 00:06:11.603 }, 00:06:11.603 "method": "bdev_nvme_attach_controller" 00:06:11.603 }, 00:06:11.603 { 00:06:11.603 "method": "bdev_wait_for_examine" 00:06:11.603 } 00:06:11.603 ] 00:06:11.603 } 00:06:11.603 ] 00:06:11.603 } 00:06:11.603 [2024-11-12 10:28:00.344280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.862 [2024-11-12 10:28:00.372266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.862 [2024-11-12 10:28:00.399553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.862  [2024-11-12T10:28:00.620Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:11.862 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.862 10:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.122 [2024-11-12 10:28:00.680318] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:12.122 [2024-11-12 10:28:00.680614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:06:12.122 { 00:06:12.122 "subsystems": [ 00:06:12.122 { 00:06:12.122 "subsystem": "bdev", 00:06:12.122 "config": [ 00:06:12.122 { 00:06:12.122 "params": { 00:06:12.122 "trtype": "pcie", 00:06:12.122 "traddr": "0000:00:10.0", 00:06:12.122 "name": "Nvme0" 00:06:12.122 }, 00:06:12.122 "method": "bdev_nvme_attach_controller" 00:06:12.122 }, 00:06:12.122 { 00:06:12.122 "method": "bdev_wait_for_examine" 00:06:12.122 } 00:06:12.122 ] 00:06:12.122 } 00:06:12.122 ] 00:06:12.122 } 00:06:12.122 [2024-11-12 10:28:00.821865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.122 [2024-11-12 10:28:00.849284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.122 [2024-11-12 10:28:00.877782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.381  [2024-11-12T10:28:01.139Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:12.381 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.381 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.947 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:12.947 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.947 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.947 10:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.947 { 00:06:12.947 "subsystems": [ 00:06:12.947 { 00:06:12.947 "subsystem": "bdev", 00:06:12.947 "config": [ 00:06:12.947 { 00:06:12.947 "params": { 00:06:12.947 "trtype": "pcie", 00:06:12.947 "traddr": "0000:00:10.0", 00:06:12.947 "name": "Nvme0" 00:06:12.947 }, 00:06:12.947 "method": "bdev_nvme_attach_controller" 00:06:12.947 }, 00:06:12.947 { 00:06:12.947 "method": "bdev_wait_for_examine" 00:06:12.947 } 00:06:12.947 ] 00:06:12.947 } 00:06:12.947 ] 00:06:12.947 } 00:06:12.947 [2024-11-12 10:28:01.600280] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
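The gen_bytes calls sprinkled through this test (gen_bytes 61440, 57344, 49152 and, in dd_rw_offset below, 4096) come from dd/common.sh per the trace and evidently produce that many bytes of random lowercase-alphanumeric test data; the exact destination is not visible here, though in dd_rw_offset the output is captured straight into a shell variable. Its implementation is not printed in this log, so the following is only a rough stand-in:

    # Hypothetical stand-in for gen_bytes (the real helper in test/dd/common.sh is not shown here):
    # emit N random alphanumeric bytes on stdout.
    gen_bytes() {
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
    }
    gen_bytes 49152 > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # e.g. staging input for the 16384-byte passes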
00:06:12.947 [2024-11-12 10:28:01.600564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:06:13.206 [2024-11-12 10:28:01.745500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.206 [2024-11-12 10:28:01.772910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.206 [2024-11-12 10:28:01.800887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.206  [2024-11-12T10:28:02.223Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:13.465 00:06:13.465 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.465 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:13.465 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.465 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.465 [2024-11-12 10:28:02.073636] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:13.465 [2024-11-12 10:28:02.073727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:06:13.465 { 00:06:13.465 "subsystems": [ 00:06:13.465 { 00:06:13.465 "subsystem": "bdev", 00:06:13.465 "config": [ 00:06:13.465 { 00:06:13.465 "params": { 00:06:13.465 "trtype": "pcie", 00:06:13.465 "traddr": "0000:00:10.0", 00:06:13.465 "name": "Nvme0" 00:06:13.465 }, 00:06:13.465 "method": "bdev_nvme_attach_controller" 00:06:13.465 }, 00:06:13.465 { 00:06:13.465 "method": "bdev_wait_for_examine" 00:06:13.465 } 00:06:13.465 ] 00:06:13.465 } 00:06:13.465 ] 00:06:13.465 } 00:06:13.465 [2024-11-12 10:28:02.217880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.724 [2024-11-12 10:28:02.248205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.724 [2024-11-12 10:28:02.275714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.724  [2024-11-12T10:28:02.741Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:13.983 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.983 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.983 [2024-11-12 10:28:02.546925] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:13.983 [2024-11-12 10:28:02.547017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:06:13.983 { 00:06:13.983 "subsystems": [ 00:06:13.983 { 00:06:13.983 "subsystem": "bdev", 00:06:13.983 "config": [ 00:06:13.983 { 00:06:13.983 "params": { 00:06:13.983 "trtype": "pcie", 00:06:13.983 "traddr": "0000:00:10.0", 00:06:13.983 "name": "Nvme0" 00:06:13.983 }, 00:06:13.983 "method": "bdev_nvme_attach_controller" 00:06:13.983 }, 00:06:13.983 { 00:06:13.983 "method": "bdev_wait_for_examine" 00:06:13.983 } 00:06:13.983 ] 00:06:13.983 } 00:06:13.984 ] 00:06:13.984 } 00:06:13.984 [2024-11-12 10:28:02.693159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.984 [2024-11-12 10:28:02.725270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.242 [2024-11-12 10:28:02.753785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.242  [2024-11-12T10:28:03.000Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.242 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.243 10:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.810 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:14.810 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.810 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.810 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.810 [2024-11-12 10:28:03.488513] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:14.810 [2024-11-12 10:28:03.488798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:06:14.810 { 00:06:14.810 "subsystems": [ 00:06:14.810 { 00:06:14.810 "subsystem": "bdev", 00:06:14.810 "config": [ 00:06:14.810 { 00:06:14.810 "params": { 00:06:14.810 "trtype": "pcie", 00:06:14.810 "traddr": "0000:00:10.0", 00:06:14.810 "name": "Nvme0" 00:06:14.810 }, 00:06:14.810 "method": "bdev_nvme_attach_controller" 00:06:14.810 }, 00:06:14.810 { 00:06:14.810 "method": "bdev_wait_for_examine" 00:06:14.810 } 00:06:14.810 ] 00:06:14.810 } 00:06:14.810 ] 00:06:14.810 } 00:06:15.069 [2024-11-12 10:28:03.633339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.069 [2024-11-12 10:28:03.664646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.069 [2024-11-12 10:28:03.695671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.069  [2024-11-12T10:28:04.086Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:15.328 00:06:15.328 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:15.328 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.328 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.328 10:28:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.328 { 00:06:15.328 "subsystems": [ 00:06:15.328 { 00:06:15.328 "subsystem": "bdev", 00:06:15.328 "config": [ 00:06:15.328 { 00:06:15.328 "params": { 00:06:15.328 "trtype": "pcie", 00:06:15.328 "traddr": "0000:00:10.0", 00:06:15.328 "name": "Nvme0" 00:06:15.328 }, 00:06:15.328 "method": "bdev_nvme_attach_controller" 00:06:15.328 }, 00:06:15.328 { 00:06:15.328 "method": "bdev_wait_for_examine" 00:06:15.328 } 00:06:15.328 ] 00:06:15.328 } 00:06:15.328 ] 00:06:15.328 } 00:06:15.328 [2024-11-12 10:28:03.964421] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:15.328 [2024-11-12 10:28:03.964511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:06:15.587 [2024-11-12 10:28:04.108743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.587 [2024-11-12 10:28:04.136352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.587 [2024-11-12 10:28:04.164233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.587  [2024-11-12T10:28:04.605Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:15.847 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.847 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.847 [2024-11-12 10:28:04.436149] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:15.847 [2024-11-12 10:28:04.436256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59782 ] 00:06:15.847 { 00:06:15.847 "subsystems": [ 00:06:15.847 { 00:06:15.847 "subsystem": "bdev", 00:06:15.847 "config": [ 00:06:15.847 { 00:06:15.847 "params": { 00:06:15.847 "trtype": "pcie", 00:06:15.847 "traddr": "0000:00:10.0", 00:06:15.847 "name": "Nvme0" 00:06:15.847 }, 00:06:15.847 "method": "bdev_nvme_attach_controller" 00:06:15.847 }, 00:06:15.847 { 00:06:15.847 "method": "bdev_wait_for_examine" 00:06:15.847 } 00:06:15.847 ] 00:06:15.847 } 00:06:15.847 ] 00:06:15.847 } 00:06:15.847 [2024-11-12 10:28:04.579009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.106 [2024-11-12 10:28:04.607831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.106 [2024-11-12 10:28:04.636016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.106  [2024-11-12T10:28:04.864Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.106 00:06:16.106 ************************************ 00:06:16.106 END TEST dd_rw 00:06:16.106 ************************************ 00:06:16.106 00:06:16.106 real 0m11.793s 00:06:16.106 user 0m8.788s 00:06:16.106 sys 0m3.548s 00:06:16.106 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.106 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 ************************************ 00:06:16.366 START TEST dd_rw_offset 00:06:16.366 ************************************ 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=nfop7wba4x6bdp8c4vgqt8c7si89d8yiqlhgf5dri365lkmz881x6nhsr0ykk3v13olpoqnrso1o5z06bckdktxkwt0njz66g7h40sqmqbrj85hqunzbywh3bu2yacmvlsrjgtpuivqr4ht6vfsf4o6kw2gjnl5my9z0fnpmvk7hfr69khd5ldr0wjsarn1uqcyoox3u3c29wbkf4d993maxep8hgdxe9s6rp6y0p71vymli29vodykoryym7bqebf3xjucoolc8x6lt6fs8csdcsod86zg854dkarpxodetp2su286gysmha43jp9f25ze19nvgs9y4a488qq6atlr2uilmmxjdas7s1clr4bevn5sr9f2lvrz96ds5inbwqs8qjmbhlcb4fo5rqea7pkgz72asgwsqov7d5tie1lu4i23yz5ia5esvv1io99ncgi7rkrtq6jxnz6f5nld86e5204lvaydqigiisbguvwcppds769q8lqr25yi993ljbhls9yc7l3vz7xm1s03fg4757oqe8g3eio16swofn8ktk7f426jg3hs5ox22gm79nf4g2zzsxgqreq27ms5qcie8oswa351s65zbwupx2kata50m3w9f3tx5izc0odulsr5ml4bpoq44v4t8duz32k0efj0j5ywvd10c8m4u8jgk6knopofyiqiebci1okrlounixxx400i548kl0qwu1cirym4jwdeo9eyvyqmxy4dpypjwns4mkt14f0s3j77vagi5m0llr86aqgzkk7xmk0gqcp5j5y8ddaje73mmh0si44xjbwhxfdmumxchebpdsre099jt2cf607x1gvhm6hvjwc1l6l0uj3y3j7w93ilh9wuwqzax0cydrewb2mkylln33i62qla259lz6oz3izoi2raokrcogywje55h2hxuguu1yhq0sj3yog5n11rhqjdknkb60419lmgcgcpt61o3jqbobenvx4m83xbxebth608p1myww7oad4r9kf6ubatlydljbfd9jnd71wc5ll4fdlifr7pv8ucuu6gp16v5fih72hfiz5h0ffcwh9libgrvztjyy3o29mdnjm38gq8sby6v2kwcet7qt67qu4ru5o2oo0bcrc998u3upbhp19qwz7piesoj2nxyqehw7hw0a3eiyu4birciv6vj6851mympk54uxhdct0c5pukm9u44a6ogwm5j6rci0j1f2szlr88s8q5d26tezg0ehfvx32f0v99neflkh5akvm82227u1vz62rjxwy94rcmpuajuhvkax36ts23e62kz2as5g6r5lzs14m8cm6b1uysauc2x7wertza3k3v020gzxgom2g5pdnkr8gn8k8h28d6u50pqwjpt91fjy9otb7qnlvcxz8ickioi9jjppcnomv6113un9uktm62ma93dozkz778gufw9f1ac6fx28m3xzjqc81ba0jocm2tn20unl1t6o12klblikxl9qt1mzkxmt1auec4jvs1rk7r1m2zzxuxp4xxsurwovtfpuwuu3nkvidh4wzvatpqumr6f5exroq0lswdqcn8ptdeud6xxfnjvxf883f2hcbverwyl31u0im3q99qhvy84ow8rjqa2u2qvfvgvllfke9asg6fwvrqsw9qpjp8ikbwdmxlu0mza9hjnlg93mgdwrzzi3njfjgnoaubuazchuyzd6pgm8ffnyjvkp7nx6c44cgu5a1dmoapc32wucsats4tike1qgj1cb2ftyhc6wqqlm4gd1r05dm1tnki75cz10vwu7bsne2tmmu56zunuf0ormvm9h06z9tk7o5z37rmglejx9wxpqtimir8mkpszxjzecczqec9ho5k6188yvyw97r7x3dkoyswf9jljfvlqnli9wbh71knlvfa3jvi1sfux89xyffdqoe6z4qb4g30yz9ma0ss6oqkykiwrghyi36hhcqb517o4330hhhwjmrsffnhiv3zkwlbj5ptv8rjxzxf9q0d53zz1y22zd2wvnl8kiqqrfme59xrtpdtqrogtan0uqwg1k6zcid336o4l2ypmd9opw1m3opa03eg2nndfsu4i09w0ozbi5rcagl6n8gtlpx7h1odo5axp62vjt0xhsw2hx4jrnrz78fjbeix46cr4x3o10qdjaguxlin3ib634xvxrdfztew6bs439k0u5ys1w33lsyjirydyyq32pb24zip6n1zllzatchcog6ika11it2hqvljdvl5jt8gsv6at8572mnjvwgnz4pk5ifkal4z36n3h354om7lirv30e2y7odjn4pevs1bszkwv92a5rq5q3nspghkhp6r3vz7xp6t5gddhm4lcnklss2c2rxfia84wg0l9j5b9j4cxsxin3yd5lnk1p6zh81mtcse3k911c0raertbvg355x3m93hmb82e2tbtqa8imcihh7e27ilur9rk4xuxeef73vqbhny8a1306gsdya5zz0z9rbhqiqnjcmrplbjwlxg1lo30r6a25gt22sc9juegc1wtvfro463c773sypd57nqcx7mf5mfmrsxvooih8ebw9rf41xd5yz97emp1lt38g18d8d55tcg2yg5f769esp8irglmux9f7bq2wut1bfgx7oq5t8xuilv4os6w0wjw0nw0fqzfrfy3hq7x042m5jvyy7cf42fxo6fable6ekbbkqpglm35x9gryeal0tucej1zq8zqg748c37jxvxzr1kwe3e2h38ouznjl4dor2bhwne3edjl03scl15nlgxogf5tq3f9uuopyey3eikwa8kuv87dar1kgfj6m9prm0hulbl8yzqesuj9i124humckcqmhabkodv2zikb1e0b9emoiq8br9ec9was3xu16uo0shqq4lonr7n21027imvtj5sqca7g55t058tpz3ra3c3l0j7y1dutzgyl15zmsx7woys43eblatuawhug3nu6i7qiu4vkinq8gf4jihbjlyp10ghwekgpomz1nlhs1nkna9dkyyslesnt1ex4x03q68q4nrvkemsa92e1alpak9gr7cyztpi8nafjwzldirgkuhfxbgc8aukncx2mnjfdq1wqgj0soc9wflaod9oclxhfm1xfrcipz4hooyosqoirzdccvcmi4auidkab9tvsnmdd2990lnfyjagqjroi9y45v8ee9a4s9o0gfs3wdjdckcou17z7ok84cl75d6ta9kz03zaq3mqajfkcuxh090uhy8v8qa9lhifpdsdalamzn7baflycfv9j4d4dj53049j3a1sci10ygp7e9h863lnm6oxznfjsfppmbgyoqx7hqq04i3s3zmdatmctk4kvfwhkv466fynm5vk4t9yq35slv0m809u1m46lhcio7lor1552f4eigzdroqdg5sg8fvfvleyiznycq98ev18ngp8dxtx4rvnn3ntycv8vegm7lh4vf7fscogk6zmhgbyk0dg5doctyvu6odtdp2ifuxkcfs1jl2
614u8hpn5byk6a1uzaviuuo07lz4q0elkpjbeid58j3146ouwfrderpfauxzx2etq9e0aaeav2udeibzah6w0lub18bjc29tz3pwngdfgdpkv4at1b8lapis7vyc9um6dqaxj7bnzgan02qqxgx6lvzscyguvv9u28dw6fzhglv3d13hlkl173ubd7m8i4ld8v6gcor6fp18kj13crxsisgfl0dhz0omlbk7q46ym5pepv48fl9fkoz3lal3w2h69i9ol4sqk57ysxx5lz9tcolxvcg2ii6284sc9hplnwjjvfdgthcc6ngqw2hik57o4rpfowe0iebm91btjtl69netbdelrh6kmmce7v6ozfrpufawebfrhbkrok0vovx0k3bbd452fhsdwxngcee3938emncqgnf17c6hdz7lmnzbz48w42hrip5de6w1vofb6ra510fdszjskva6f4ir2d570x6mulno0lwcatdal50c3w66c6rpfvni7uiqo1pvpdvxq1xcrp66i85qgw7oczkwpsh4lt1fmc 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:16.366 10:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 { 00:06:16.366 "subsystems": [ 00:06:16.366 { 00:06:16.366 "subsystem": "bdev", 00:06:16.366 "config": [ 00:06:16.366 { 00:06:16.366 "params": { 00:06:16.366 "trtype": "pcie", 00:06:16.366 "traddr": "0000:00:10.0", 00:06:16.366 "name": "Nvme0" 00:06:16.366 }, 00:06:16.366 "method": "bdev_nvme_attach_controller" 00:06:16.366 }, 00:06:16.366 { 00:06:16.366 "method": "bdev_wait_for_examine" 00:06:16.366 } 00:06:16.366 ] 00:06:16.366 } 00:06:16.366 ] 00:06:16.366 } 00:06:16.366 [2024-11-12 10:28:05.012207] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:16.366 [2024-11-12 10:28:05.012301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:06:16.626 [2024-11-12 10:28:05.158255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.626 [2024-11-12 10:28:05.185822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.626 [2024-11-12 10:28:05.213582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.626  [2024-11-12T10:28:05.642Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:16.884 00:06:16.884 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:16.884 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:16.884 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:16.884 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:16.884 [2024-11-12 10:28:05.492237] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
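dd_rw_offset (basic_offset) verifies that --seek on the write and --skip on the read address the same block: the 4096 random bytes generated into data above are written one block into the bdev, read back from block 1, and compared against the original via the read -rn4096 data_check and [[ ... ]] steps that follow below. Condensed into a sketch reusing the variables from the earlier sketches (the step that stages $data into dd.dump0 is not visible in this excerpt and is assumed):

    data=$(gen_bytes 4096)                        # 4096 random bytes kept in a shell variable
    printf '%s' "$data" > "$DUMP0"                # assumed staging step; not shown in the trace
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(echo "$conf")            # write at block offset 1
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(echo "$conf")  # read the same block back
    read -rn4096 data_check < "$DUMP1"
    [[ "$data_check" == "$data" ]] && echo "offset read-back matches"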
00:06:16.884 [2024-11-12 10:28:05.492328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59826 ] 00:06:16.884 { 00:06:16.884 "subsystems": [ 00:06:16.884 { 00:06:16.884 "subsystem": "bdev", 00:06:16.884 "config": [ 00:06:16.884 { 00:06:16.884 "params": { 00:06:16.884 "trtype": "pcie", 00:06:16.884 "traddr": "0000:00:10.0", 00:06:16.884 "name": "Nvme0" 00:06:16.884 }, 00:06:16.884 "method": "bdev_nvme_attach_controller" 00:06:16.884 }, 00:06:16.884 { 00:06:16.884 "method": "bdev_wait_for_examine" 00:06:16.884 } 00:06:16.884 ] 00:06:16.884 } 00:06:16.884 ] 00:06:16.884 } 00:06:16.884 [2024-11-12 10:28:05.637081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.143 [2024-11-12 10:28:05.665496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.143 [2024-11-12 10:28:05.693014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.143  [2024-11-12T10:28:06.161Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:17.403 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ nfop7wba4x6bdp8c4vgqt8c7si89d8yiqlhgf5dri365lkmz881x6nhsr0ykk3v13olpoqnrso1o5z06bckdktxkwt0njz66g7h40sqmqbrj85hqunzbywh3bu2yacmvlsrjgtpuivqr4ht6vfsf4o6kw2gjnl5my9z0fnpmvk7hfr69khd5ldr0wjsarn1uqcyoox3u3c29wbkf4d993maxep8hgdxe9s6rp6y0p71vymli29vodykoryym7bqebf3xjucoolc8x6lt6fs8csdcsod86zg854dkarpxodetp2su286gysmha43jp9f25ze19nvgs9y4a488qq6atlr2uilmmxjdas7s1clr4bevn5sr9f2lvrz96ds5inbwqs8qjmbhlcb4fo5rqea7pkgz72asgwsqov7d5tie1lu4i23yz5ia5esvv1io99ncgi7rkrtq6jxnz6f5nld86e5204lvaydqigiisbguvwcppds769q8lqr25yi993ljbhls9yc7l3vz7xm1s03fg4757oqe8g3eio16swofn8ktk7f426jg3hs5ox22gm79nf4g2zzsxgqreq27ms5qcie8oswa351s65zbwupx2kata50m3w9f3tx5izc0odulsr5ml4bpoq44v4t8duz32k0efj0j5ywvd10c8m4u8jgk6knopofyiqiebci1okrlounixxx400i548kl0qwu1cirym4jwdeo9eyvyqmxy4dpypjwns4mkt14f0s3j77vagi5m0llr86aqgzkk7xmk0gqcp5j5y8ddaje73mmh0si44xjbwhxfdmumxchebpdsre099jt2cf607x1gvhm6hvjwc1l6l0uj3y3j7w93ilh9wuwqzax0cydrewb2mkylln33i62qla259lz6oz3izoi2raokrcogywje55h2hxuguu1yhq0sj3yog5n11rhqjdknkb60419lmgcgcpt61o3jqbobenvx4m83xbxebth608p1myww7oad4r9kf6ubatlydljbfd9jnd71wc5ll4fdlifr7pv8ucuu6gp16v5fih72hfiz5h0ffcwh9libgrvztjyy3o29mdnjm38gq8sby6v2kwcet7qt67qu4ru5o2oo0bcrc998u3upbhp19qwz7piesoj2nxyqehw7hw0a3eiyu4birciv6vj6851mympk54uxhdct0c5pukm9u44a6ogwm5j6rci0j1f2szlr88s8q5d26tezg0ehfvx32f0v99neflkh5akvm82227u1vz62rjxwy94rcmpuajuhvkax36ts23e62kz2as5g6r5lzs14m8cm6b1uysauc2x7wertza3k3v020gzxgom2g5pdnkr8gn8k8h28d6u50pqwjpt91fjy9otb7qnlvcxz8ickioi9jjppcnomv6113un9uktm62ma93dozkz778gufw9f1ac6fx28m3xzjqc81ba0jocm2tn20unl1t6o12klblikxl9qt1mzkxmt1auec4jvs1rk7r1m2zzxuxp4xxsurwovtfpuwuu3nkvidh4wzvatpqumr6f5exroq0lswdqcn8ptdeud6xxfnjvxf883f2hcbverwyl31u0im3q99qhvy84ow8rjqa2u2qvfvgvllfke9asg6fwvrqsw9qpjp8ikbwdmxlu0mza9hjnlg93mgdwrzzi3njfjgnoaubuazchuyzd6pgm8ffnyjvkp7nx6c44cgu5a1dmoapc32wucsats4tike1qgj1cb2ftyhc6wqqlm4gd1r05dm1tnki75cz10vwu7bsne2tmmu56zunuf0ormvm9h06z9tk7o5z37rmglejx9wxpqtimir8mkpszxjzecczqec9ho5k6188yvyw97r7x3dkoyswf9jljfvlqnli9wbh71knlvfa3jvi1sfux89xyffdqoe6z4qb4g30yz9ma0ss6oqkykiwrghyi36hhcqb517o4330hhhwjmrsffnhiv3zkwlbj5ptv8rjxzxf9q0d53zz1y22zd2wvnl8kiqqrfme59xrtpdtqrogtan0uqwg1k6zcid336o4l2ypmd9opw1m3opa03eg2nndfsu4i09w0ozbi5rcag
l6n8gtlpx7h1odo5axp62vjt0xhsw2hx4jrnrz78fjbeix46cr4x3o10qdjaguxlin3ib634xvxrdfztew6bs439k0u5ys1w33lsyjirydyyq32pb24zip6n1zllzatchcog6ika11it2hqvljdvl5jt8gsv6at8572mnjvwgnz4pk5ifkal4z36n3h354om7lirv30e2y7odjn4pevs1bszkwv92a5rq5q3nspghkhp6r3vz7xp6t5gddhm4lcnklss2c2rxfia84wg0l9j5b9j4cxsxin3yd5lnk1p6zh81mtcse3k911c0raertbvg355x3m93hmb82e2tbtqa8imcihh7e27ilur9rk4xuxeef73vqbhny8a1306gsdya5zz0z9rbhqiqnjcmrplbjwlxg1lo30r6a25gt22sc9juegc1wtvfro463c773sypd57nqcx7mf5mfmrsxvooih8ebw9rf41xd5yz97emp1lt38g18d8d55tcg2yg5f769esp8irglmux9f7bq2wut1bfgx7oq5t8xuilv4os6w0wjw0nw0fqzfrfy3hq7x042m5jvyy7cf42fxo6fable6ekbbkqpglm35x9gryeal0tucej1zq8zqg748c37jxvxzr1kwe3e2h38ouznjl4dor2bhwne3edjl03scl15nlgxogf5tq3f9uuopyey3eikwa8kuv87dar1kgfj6m9prm0hulbl8yzqesuj9i124humckcqmhabkodv2zikb1e0b9emoiq8br9ec9was3xu16uo0shqq4lonr7n21027imvtj5sqca7g55t058tpz3ra3c3l0j7y1dutzgyl15zmsx7woys43eblatuawhug3nu6i7qiu4vkinq8gf4jihbjlyp10ghwekgpomz1nlhs1nkna9dkyyslesnt1ex4x03q68q4nrvkemsa92e1alpak9gr7cyztpi8nafjwzldirgkuhfxbgc8aukncx2mnjfdq1wqgj0soc9wflaod9oclxhfm1xfrcipz4hooyosqoirzdccvcmi4auidkab9tvsnmdd2990lnfyjagqjroi9y45v8ee9a4s9o0gfs3wdjdckcou17z7ok84cl75d6ta9kz03zaq3mqajfkcuxh090uhy8v8qa9lhifpdsdalamzn7baflycfv9j4d4dj53049j3a1sci10ygp7e9h863lnm6oxznfjsfppmbgyoqx7hqq04i3s3zmdatmctk4kvfwhkv466fynm5vk4t9yq35slv0m809u1m46lhcio7lor1552f4eigzdroqdg5sg8fvfvleyiznycq98ev18ngp8dxtx4rvnn3ntycv8vegm7lh4vf7fscogk6zmhgbyk0dg5doctyvu6odtdp2ifuxkcfs1jl2614u8hpn5byk6a1uzaviuuo07lz4q0elkpjbeid58j3146ouwfrderpfauxzx2etq9e0aaeav2udeibzah6w0lub18bjc29tz3pwngdfgdpkv4at1b8lapis7vyc9um6dqaxj7bnzgan02qqxgx6lvzscyguvv9u28dw6fzhglv3d13hlkl173ubd7m8i4ld8v6gcor6fp18kj13crxsisgfl0dhz0omlbk7q46ym5pepv48fl9fkoz3lal3w2h69i9ol4sqk57ysxx5lz9tcolxvcg2ii6284sc9hplnwjjvfdgthcc6ngqw2hik57o4rpfowe0iebm91btjtl69netbdelrh6kmmce7v6ozfrpufawebfrhbkrok0vovx0k3bbd452fhsdwxngcee3938emncqgnf17c6hdz7lmnzbz48w42hrip5de6w1vofb6ra510fdszjskva6f4ir2d570x6mulno0lwcatdal50c3w66c6rpfvni7uiqo1pvpdvxq1xcrp66i85qgw7oczkwpsh4lt1fmc == 
\n\f\o\p\7\w\b\a\4\x\6\b\d\p\8\c\4\v\g\q\t\8\c\7\s\i\8\9\d\8\y\i\q\l\h\g\f\5\d\r\i\3\6\5\l\k\m\z\8\8\1\x\6\n\h\s\r\0\y\k\k\3\v\1\3\o\l\p\o\q\n\r\s\o\1\o\5\z\0\6\b\c\k\d\k\t\x\k\w\t\0\n\j\z\6\6\g\7\h\4\0\s\q\m\q\b\r\j\8\5\h\q\u\n\z\b\y\w\h\3\b\u\2\y\a\c\m\v\l\s\r\j\g\t\p\u\i\v\q\r\4\h\t\6\v\f\s\f\4\o\6\k\w\2\g\j\n\l\5\m\y\9\z\0\f\n\p\m\v\k\7\h\f\r\6\9\k\h\d\5\l\d\r\0\w\j\s\a\r\n\1\u\q\c\y\o\o\x\3\u\3\c\2\9\w\b\k\f\4\d\9\9\3\m\a\x\e\p\8\h\g\d\x\e\9\s\6\r\p\6\y\0\p\7\1\v\y\m\l\i\2\9\v\o\d\y\k\o\r\y\y\m\7\b\q\e\b\f\3\x\j\u\c\o\o\l\c\8\x\6\l\t\6\f\s\8\c\s\d\c\s\o\d\8\6\z\g\8\5\4\d\k\a\r\p\x\o\d\e\t\p\2\s\u\2\8\6\g\y\s\m\h\a\4\3\j\p\9\f\2\5\z\e\1\9\n\v\g\s\9\y\4\a\4\8\8\q\q\6\a\t\l\r\2\u\i\l\m\m\x\j\d\a\s\7\s\1\c\l\r\4\b\e\v\n\5\s\r\9\f\2\l\v\r\z\9\6\d\s\5\i\n\b\w\q\s\8\q\j\m\b\h\l\c\b\4\f\o\5\r\q\e\a\7\p\k\g\z\7\2\a\s\g\w\s\q\o\v\7\d\5\t\i\e\1\l\u\4\i\2\3\y\z\5\i\a\5\e\s\v\v\1\i\o\9\9\n\c\g\i\7\r\k\r\t\q\6\j\x\n\z\6\f\5\n\l\d\8\6\e\5\2\0\4\l\v\a\y\d\q\i\g\i\i\s\b\g\u\v\w\c\p\p\d\s\7\6\9\q\8\l\q\r\2\5\y\i\9\9\3\l\j\b\h\l\s\9\y\c\7\l\3\v\z\7\x\m\1\s\0\3\f\g\4\7\5\7\o\q\e\8\g\3\e\i\o\1\6\s\w\o\f\n\8\k\t\k\7\f\4\2\6\j\g\3\h\s\5\o\x\2\2\g\m\7\9\n\f\4\g\2\z\z\s\x\g\q\r\e\q\2\7\m\s\5\q\c\i\e\8\o\s\w\a\3\5\1\s\6\5\z\b\w\u\p\x\2\k\a\t\a\5\0\m\3\w\9\f\3\t\x\5\i\z\c\0\o\d\u\l\s\r\5\m\l\4\b\p\o\q\4\4\v\4\t\8\d\u\z\3\2\k\0\e\f\j\0\j\5\y\w\v\d\1\0\c\8\m\4\u\8\j\g\k\6\k\n\o\p\o\f\y\i\q\i\e\b\c\i\1\o\k\r\l\o\u\n\i\x\x\x\4\0\0\i\5\4\8\k\l\0\q\w\u\1\c\i\r\y\m\4\j\w\d\e\o\9\e\y\v\y\q\m\x\y\4\d\p\y\p\j\w\n\s\4\m\k\t\1\4\f\0\s\3\j\7\7\v\a\g\i\5\m\0\l\l\r\8\6\a\q\g\z\k\k\7\x\m\k\0\g\q\c\p\5\j\5\y\8\d\d\a\j\e\7\3\m\m\h\0\s\i\4\4\x\j\b\w\h\x\f\d\m\u\m\x\c\h\e\b\p\d\s\r\e\0\9\9\j\t\2\c\f\6\0\7\x\1\g\v\h\m\6\h\v\j\w\c\1\l\6\l\0\u\j\3\y\3\j\7\w\9\3\i\l\h\9\w\u\w\q\z\a\x\0\c\y\d\r\e\w\b\2\m\k\y\l\l\n\3\3\i\6\2\q\l\a\2\5\9\l\z\6\o\z\3\i\z\o\i\2\r\a\o\k\r\c\o\g\y\w\j\e\5\5\h\2\h\x\u\g\u\u\1\y\h\q\0\s\j\3\y\o\g\5\n\1\1\r\h\q\j\d\k\n\k\b\6\0\4\1\9\l\m\g\c\g\c\p\t\6\1\o\3\j\q\b\o\b\e\n\v\x\4\m\8\3\x\b\x\e\b\t\h\6\0\8\p\1\m\y\w\w\7\o\a\d\4\r\9\k\f\6\u\b\a\t\l\y\d\l\j\b\f\d\9\j\n\d\7\1\w\c\5\l\l\4\f\d\l\i\f\r\7\p\v\8\u\c\u\u\6\g\p\1\6\v\5\f\i\h\7\2\h\f\i\z\5\h\0\f\f\c\w\h\9\l\i\b\g\r\v\z\t\j\y\y\3\o\2\9\m\d\n\j\m\3\8\g\q\8\s\b\y\6\v\2\k\w\c\e\t\7\q\t\6\7\q\u\4\r\u\5\o\2\o\o\0\b\c\r\c\9\9\8\u\3\u\p\b\h\p\1\9\q\w\z\7\p\i\e\s\o\j\2\n\x\y\q\e\h\w\7\h\w\0\a\3\e\i\y\u\4\b\i\r\c\i\v\6\v\j\6\8\5\1\m\y\m\p\k\5\4\u\x\h\d\c\t\0\c\5\p\u\k\m\9\u\4\4\a\6\o\g\w\m\5\j\6\r\c\i\0\j\1\f\2\s\z\l\r\8\8\s\8\q\5\d\2\6\t\e\z\g\0\e\h\f\v\x\3\2\f\0\v\9\9\n\e\f\l\k\h\5\a\k\v\m\8\2\2\2\7\u\1\v\z\6\2\r\j\x\w\y\9\4\r\c\m\p\u\a\j\u\h\v\k\a\x\3\6\t\s\2\3\e\6\2\k\z\2\a\s\5\g\6\r\5\l\z\s\1\4\m\8\c\m\6\b\1\u\y\s\a\u\c\2\x\7\w\e\r\t\z\a\3\k\3\v\0\2\0\g\z\x\g\o\m\2\g\5\p\d\n\k\r\8\g\n\8\k\8\h\2\8\d\6\u\5\0\p\q\w\j\p\t\9\1\f\j\y\9\o\t\b\7\q\n\l\v\c\x\z\8\i\c\k\i\o\i\9\j\j\p\p\c\n\o\m\v\6\1\1\3\u\n\9\u\k\t\m\6\2\m\a\9\3\d\o\z\k\z\7\7\8\g\u\f\w\9\f\1\a\c\6\f\x\2\8\m\3\x\z\j\q\c\8\1\b\a\0\j\o\c\m\2\t\n\2\0\u\n\l\1\t\6\o\1\2\k\l\b\l\i\k\x\l\9\q\t\1\m\z\k\x\m\t\1\a\u\e\c\4\j\v\s\1\r\k\7\r\1\m\2\z\z\x\u\x\p\4\x\x\s\u\r\w\o\v\t\f\p\u\w\u\u\3\n\k\v\i\d\h\4\w\z\v\a\t\p\q\u\m\r\6\f\5\e\x\r\o\q\0\l\s\w\d\q\c\n\8\p\t\d\e\u\d\6\x\x\f\n\j\v\x\f\8\8\3\f\2\h\c\b\v\e\r\w\y\l\3\1\u\0\i\m\3\q\9\9\q\h\v\y\8\4\o\w\8\r\j\q\a\2\u\2\q\v\f\v\g\v\l\l\f\k\e\9\a\s\g\6\f\w\v\r\q\s\w\9\q\p\j\p\8\i\k\b\w\d\m\x\l\u\0\m\z\a\9\h\j\n\l\g\9\3\m\g\d\w\r\z\z\i\3\n\j\f\j\g\n\o\a\u\b\u\a\z\c\h\u\y\z\d\6\p\g\m\8\f\f\n\y\j\v\k\p\7\n\x\6\c\4\4\c\g\u\5\a\1\d\m\o\a\p\c\3\2\w\u\c\s\a\t\s\4\t\i\k\e\1\q\g\j\1\c\b\
2\f\t\y\h\c\6\w\q\q\l\m\4\g\d\1\r\0\5\d\m\1\t\n\k\i\7\5\c\z\1\0\v\w\u\7\b\s\n\e\2\t\m\m\u\5\6\z\u\n\u\f\0\o\r\m\v\m\9\h\0\6\z\9\t\k\7\o\5\z\3\7\r\m\g\l\e\j\x\9\w\x\p\q\t\i\m\i\r\8\m\k\p\s\z\x\j\z\e\c\c\z\q\e\c\9\h\o\5\k\6\1\8\8\y\v\y\w\9\7\r\7\x\3\d\k\o\y\s\w\f\9\j\l\j\f\v\l\q\n\l\i\9\w\b\h\7\1\k\n\l\v\f\a\3\j\v\i\1\s\f\u\x\8\9\x\y\f\f\d\q\o\e\6\z\4\q\b\4\g\3\0\y\z\9\m\a\0\s\s\6\o\q\k\y\k\i\w\r\g\h\y\i\3\6\h\h\c\q\b\5\1\7\o\4\3\3\0\h\h\h\w\j\m\r\s\f\f\n\h\i\v\3\z\k\w\l\b\j\5\p\t\v\8\r\j\x\z\x\f\9\q\0\d\5\3\z\z\1\y\2\2\z\d\2\w\v\n\l\8\k\i\q\q\r\f\m\e\5\9\x\r\t\p\d\t\q\r\o\g\t\a\n\0\u\q\w\g\1\k\6\z\c\i\d\3\3\6\o\4\l\2\y\p\m\d\9\o\p\w\1\m\3\o\p\a\0\3\e\g\2\n\n\d\f\s\u\4\i\0\9\w\0\o\z\b\i\5\r\c\a\g\l\6\n\8\g\t\l\p\x\7\h\1\o\d\o\5\a\x\p\6\2\v\j\t\0\x\h\s\w\2\h\x\4\j\r\n\r\z\7\8\f\j\b\e\i\x\4\6\c\r\4\x\3\o\1\0\q\d\j\a\g\u\x\l\i\n\3\i\b\6\3\4\x\v\x\r\d\f\z\t\e\w\6\b\s\4\3\9\k\0\u\5\y\s\1\w\3\3\l\s\y\j\i\r\y\d\y\y\q\3\2\p\b\2\4\z\i\p\6\n\1\z\l\l\z\a\t\c\h\c\o\g\6\i\k\a\1\1\i\t\2\h\q\v\l\j\d\v\l\5\j\t\8\g\s\v\6\a\t\8\5\7\2\m\n\j\v\w\g\n\z\4\p\k\5\i\f\k\a\l\4\z\3\6\n\3\h\3\5\4\o\m\7\l\i\r\v\3\0\e\2\y\7\o\d\j\n\4\p\e\v\s\1\b\s\z\k\w\v\9\2\a\5\r\q\5\q\3\n\s\p\g\h\k\h\p\6\r\3\v\z\7\x\p\6\t\5\g\d\d\h\m\4\l\c\n\k\l\s\s\2\c\2\r\x\f\i\a\8\4\w\g\0\l\9\j\5\b\9\j\4\c\x\s\x\i\n\3\y\d\5\l\n\k\1\p\6\z\h\8\1\m\t\c\s\e\3\k\9\1\1\c\0\r\a\e\r\t\b\v\g\3\5\5\x\3\m\9\3\h\m\b\8\2\e\2\t\b\t\q\a\8\i\m\c\i\h\h\7\e\2\7\i\l\u\r\9\r\k\4\x\u\x\e\e\f\7\3\v\q\b\h\n\y\8\a\1\3\0\6\g\s\d\y\a\5\z\z\0\z\9\r\b\h\q\i\q\n\j\c\m\r\p\l\b\j\w\l\x\g\1\l\o\3\0\r\6\a\2\5\g\t\2\2\s\c\9\j\u\e\g\c\1\w\t\v\f\r\o\4\6\3\c\7\7\3\s\y\p\d\5\7\n\q\c\x\7\m\f\5\m\f\m\r\s\x\v\o\o\i\h\8\e\b\w\9\r\f\4\1\x\d\5\y\z\9\7\e\m\p\1\l\t\3\8\g\1\8\d\8\d\5\5\t\c\g\2\y\g\5\f\7\6\9\e\s\p\8\i\r\g\l\m\u\x\9\f\7\b\q\2\w\u\t\1\b\f\g\x\7\o\q\5\t\8\x\u\i\l\v\4\o\s\6\w\0\w\j\w\0\n\w\0\f\q\z\f\r\f\y\3\h\q\7\x\0\4\2\m\5\j\v\y\y\7\c\f\4\2\f\x\o\6\f\a\b\l\e\6\e\k\b\b\k\q\p\g\l\m\3\5\x\9\g\r\y\e\a\l\0\t\u\c\e\j\1\z\q\8\z\q\g\7\4\8\c\3\7\j\x\v\x\z\r\1\k\w\e\3\e\2\h\3\8\o\u\z\n\j\l\4\d\o\r\2\b\h\w\n\e\3\e\d\j\l\0\3\s\c\l\1\5\n\l\g\x\o\g\f\5\t\q\3\f\9\u\u\o\p\y\e\y\3\e\i\k\w\a\8\k\u\v\8\7\d\a\r\1\k\g\f\j\6\m\9\p\r\m\0\h\u\l\b\l\8\y\z\q\e\s\u\j\9\i\1\2\4\h\u\m\c\k\c\q\m\h\a\b\k\o\d\v\2\z\i\k\b\1\e\0\b\9\e\m\o\i\q\8\b\r\9\e\c\9\w\a\s\3\x\u\1\6\u\o\0\s\h\q\q\4\l\o\n\r\7\n\2\1\0\2\7\i\m\v\t\j\5\s\q\c\a\7\g\5\5\t\0\5\8\t\p\z\3\r\a\3\c\3\l\0\j\7\y\1\d\u\t\z\g\y\l\1\5\z\m\s\x\7\w\o\y\s\4\3\e\b\l\a\t\u\a\w\h\u\g\3\n\u\6\i\7\q\i\u\4\v\k\i\n\q\8\g\f\4\j\i\h\b\j\l\y\p\1\0\g\h\w\e\k\g\p\o\m\z\1\n\l\h\s\1\n\k\n\a\9\d\k\y\y\s\l\e\s\n\t\1\e\x\4\x\0\3\q\6\8\q\4\n\r\v\k\e\m\s\a\9\2\e\1\a\l\p\a\k\9\g\r\7\c\y\z\t\p\i\8\n\a\f\j\w\z\l\d\i\r\g\k\u\h\f\x\b\g\c\8\a\u\k\n\c\x\2\m\n\j\f\d\q\1\w\q\g\j\0\s\o\c\9\w\f\l\a\o\d\9\o\c\l\x\h\f\m\1\x\f\r\c\i\p\z\4\h\o\o\y\o\s\q\o\i\r\z\d\c\c\v\c\m\i\4\a\u\i\d\k\a\b\9\t\v\s\n\m\d\d\2\9\9\0\l\n\f\y\j\a\g\q\j\r\o\i\9\y\4\5\v\8\e\e\9\a\4\s\9\o\0\g\f\s\3\w\d\j\d\c\k\c\o\u\1\7\z\7\o\k\8\4\c\l\7\5\d\6\t\a\9\k\z\0\3\z\a\q\3\m\q\a\j\f\k\c\u\x\h\0\9\0\u\h\y\8\v\8\q\a\9\l\h\i\f\p\d\s\d\a\l\a\m\z\n\7\b\a\f\l\y\c\f\v\9\j\4\d\4\d\j\5\3\0\4\9\j\3\a\1\s\c\i\1\0\y\g\p\7\e\9\h\8\6\3\l\n\m\6\o\x\z\n\f\j\s\f\p\p\m\b\g\y\o\q\x\7\h\q\q\0\4\i\3\s\3\z\m\d\a\t\m\c\t\k\4\k\v\f\w\h\k\v\4\6\6\f\y\n\m\5\v\k\4\t\9\y\q\3\5\s\l\v\0\m\8\0\9\u\1\m\4\6\l\h\c\i\o\7\l\o\r\1\5\5\2\f\4\e\i\g\z\d\r\o\q\d\g\5\s\g\8\f\v\f\v\l\e\y\i\z\n\y\c\q\9\8\e\v\1\8\n\g\p\8\d\x\t\x\4\r\v\n\n\3\n\t\y\c\v\8\v\e\g\m\7\l\h\4\v\f\7\f\s\c\o\g\k\6\z\m\h\g\b\y\k\0\d\g\5\d\o\c\t\y\v\u\6\o\d\t\d\p\2\i\f\u\x\k\c\f\s\1\j\l\2\6\1\4\u\8
\h\p\n\5\b\y\k\6\a\1\u\z\a\v\i\u\u\o\0\7\l\z\4\q\0\e\l\k\p\j\b\e\i\d\5\8\j\3\1\4\6\o\u\w\f\r\d\e\r\p\f\a\u\x\z\x\2\e\t\q\9\e\0\a\a\e\a\v\2\u\d\e\i\b\z\a\h\6\w\0\l\u\b\1\8\b\j\c\2\9\t\z\3\p\w\n\g\d\f\g\d\p\k\v\4\a\t\1\b\8\l\a\p\i\s\7\v\y\c\9\u\m\6\d\q\a\x\j\7\b\n\z\g\a\n\0\2\q\q\x\g\x\6\l\v\z\s\c\y\g\u\v\v\9\u\2\8\d\w\6\f\z\h\g\l\v\3\d\1\3\h\l\k\l\1\7\3\u\b\d\7\m\8\i\4\l\d\8\v\6\g\c\o\r\6\f\p\1\8\k\j\1\3\c\r\x\s\i\s\g\f\l\0\d\h\z\0\o\m\l\b\k\7\q\4\6\y\m\5\p\e\p\v\4\8\f\l\9\f\k\o\z\3\l\a\l\3\w\2\h\6\9\i\9\o\l\4\s\q\k\5\7\y\s\x\x\5\l\z\9\t\c\o\l\x\v\c\g\2\i\i\6\2\8\4\s\c\9\h\p\l\n\w\j\j\v\f\d\g\t\h\c\c\6\n\g\q\w\2\h\i\k\5\7\o\4\r\p\f\o\w\e\0\i\e\b\m\9\1\b\t\j\t\l\6\9\n\e\t\b\d\e\l\r\h\6\k\m\m\c\e\7\v\6\o\z\f\r\p\u\f\a\w\e\b\f\r\h\b\k\r\o\k\0\v\o\v\x\0\k\3\b\b\d\4\5\2\f\h\s\d\w\x\n\g\c\e\e\3\9\3\8\e\m\n\c\q\g\n\f\1\7\c\6\h\d\z\7\l\m\n\z\b\z\4\8\w\4\2\h\r\i\p\5\d\e\6\w\1\v\o\f\b\6\r\a\5\1\0\f\d\s\z\j\s\k\v\a\6\f\4\i\r\2\d\5\7\0\x\6\m\u\l\n\o\0\l\w\c\a\t\d\a\l\5\0\c\3\w\6\6\c\6\r\p\f\v\n\i\7\u\i\q\o\1\p\v\p\d\v\x\q\1\x\c\r\p\6\6\i\8\5\q\g\w\7\o\c\z\k\w\p\s\h\4\l\t\1\f\m\c ]] 00:06:17.403 00:06:17.403 real 0m1.012s 00:06:17.403 user 0m0.702s 00:06:17.403 sys 0m0.372s 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:17.403 ************************************ 00:06:17.403 END TEST dd_rw_offset 00:06:17.403 ************************************ 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.403 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.404 10:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.404 [2024-11-12 10:28:06.005368] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
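Condensed, the dd_rw_offset round trip that just finished, together with the clear_nvme cleanup being started here, looks roughly like the sketch below; gen_bytes is the helper from test/dd/common.sh, and $bdev_conf stands for a file holding the JSON shown earlier (both placeholders, not literal lines from this run).

  data=$(gen_bytes 4096)                          # 4 KiB of random payload
  printf '%s' "$data" > dd.dump0
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$bdev_conf"            # write at offset 1
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$bdev_conf"  # read it back
  read -rn4096 data_check < dd.dump1
  [[ $data_check == "$data" ]]                    # the long escaped comparison seen above
  # cleanup: overwrite the first MiB of the namespace with zeroes and drop the dump files
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$bdev_conf"
  rm -f dd.dump0 dd.dump1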
00:06:17.404 [2024-11-12 10:28:06.005454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:06:17.404 { 00:06:17.404 "subsystems": [ 00:06:17.404 { 00:06:17.404 "subsystem": "bdev", 00:06:17.404 "config": [ 00:06:17.404 { 00:06:17.404 "params": { 00:06:17.404 "trtype": "pcie", 00:06:17.404 "traddr": "0000:00:10.0", 00:06:17.404 "name": "Nvme0" 00:06:17.404 }, 00:06:17.404 "method": "bdev_nvme_attach_controller" 00:06:17.404 }, 00:06:17.404 { 00:06:17.404 "method": "bdev_wait_for_examine" 00:06:17.404 } 00:06:17.404 ] 00:06:17.404 } 00:06:17.404 ] 00:06:17.404 } 00:06:17.404 [2024-11-12 10:28:06.145465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.663 [2024-11-12 10:28:06.173705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.663 [2024-11-12 10:28:06.200923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.663  [2024-11-12T10:28:06.680Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:17.922 00:06:17.922 10:28:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.922 ************************************ 00:06:17.922 END TEST spdk_dd_basic_rw 00:06:17.922 ************************************ 00:06:17.922 00:06:17.922 real 0m14.388s 00:06:17.922 user 0m10.450s 00:06:17.922 sys 0m4.424s 00:06:17.922 10:28:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.922 10:28:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 10:28:06 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:17.922 10:28:06 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.922 10:28:06 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.922 10:28:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 ************************************ 00:06:17.922 START TEST spdk_dd_posix 00:06:17.922 ************************************ 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:17.922 * Looking for test storage... 
00:06:17.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.922 --rc genhtml_branch_coverage=1 00:06:17.922 --rc genhtml_function_coverage=1 00:06:17.922 --rc genhtml_legend=1 00:06:17.922 --rc geninfo_all_blocks=1 00:06:17.922 --rc geninfo_unexecuted_blocks=1 00:06:17.922 00:06:17.922 ' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:17.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.922 --rc genhtml_branch_coverage=1 00:06:17.922 --rc genhtml_function_coverage=1 00:06:17.922 --rc genhtml_legend=1 00:06:17.922 --rc geninfo_all_blocks=1 00:06:17.922 --rc geninfo_unexecuted_blocks=1 00:06:17.922 00:06:17.922 ' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.922 --rc genhtml_branch_coverage=1 00:06:17.922 --rc genhtml_function_coverage=1 00:06:17.922 --rc genhtml_legend=1 00:06:17.922 --rc geninfo_all_blocks=1 00:06:17.922 --rc geninfo_unexecuted_blocks=1 00:06:17.922 00:06:17.922 ' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.922 --rc genhtml_branch_coverage=1 00:06:17.922 --rc genhtml_function_coverage=1 00:06:17.922 --rc genhtml_legend=1 00:06:17.922 --rc geninfo_all_blocks=1 00:06:17.922 --rc geninfo_unexecuted_blocks=1 00:06:17.922 00:06:17.922 ' 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.922 10:28:06 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:17.923 * First test run, liburing in use 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:17.923 ************************************ 00:06:17.923 START TEST dd_flag_append 00:06:17.923 ************************************ 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:17.923 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=1ml4m77m879whthjd1xmvnv1fr2l8mvk 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8d68rju5uaxv9hcloe3uvdtoxqcqwi3s 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 1ml4m77m879whthjd1xmvnv1fr2l8mvk 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8d68rju5uaxv9hcloe3uvdtoxqcqwi3s 00:06:18.182 10:28:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:18.182 [2024-11-12 10:28:06.736892] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
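In outline, the dd_flag_append run being traced here does the following (a sketch; the two 32-byte strings are whatever gen_bytes returns on a given run):

  dump0=$(gen_bytes 32)        # 1ml4m77m... in this run
  dump1=$(gen_bytes 32)        # 8d68rju5... in this run
  printf '%s' "$dump0" > dd.dump0
  printf '%s' "$dump1" > dd.dump1
  # --oflag=append puts the output file in append mode, so dump0 must land after dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]]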
00:06:18.182 [2024-11-12 10:28:06.737141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:06:18.182 [2024-11-12 10:28:06.882960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.182 [2024-11-12 10:28:06.911045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.182 [2024-11-12 10:28:06.938435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.441  [2024-11-12T10:28:07.199Z] Copying: 32/32 [B] (average 31 kBps) 00:06:18.441 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8d68rju5uaxv9hcloe3uvdtoxqcqwi3s1ml4m77m879whthjd1xmvnv1fr2l8mvk == \8\d\6\8\r\j\u\5\u\a\x\v\9\h\c\l\o\e\3\u\v\d\t\o\x\q\c\q\w\i\3\s\1\m\l\4\m\7\7\m\8\7\9\w\h\t\h\j\d\1\x\m\v\n\v\1\f\r\2\l\8\m\v\k ]] 00:06:18.441 00:06:18.441 real 0m0.391s 00:06:18.441 user 0m0.195s 00:06:18.441 sys 0m0.161s 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.441 ************************************ 00:06:18.441 END TEST dd_flag_append 00:06:18.441 ************************************ 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 ************************************ 00:06:18.441 START TEST dd_flag_directory 00:06:18.441 ************************************ 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.441 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.442 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.442 [2024-11-12 10:28:07.172257] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:18.442 [2024-11-12 10:28:07.172334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:06:18.700 [2024-11-12 10:28:07.302461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.700 [2024-11-12 10:28:07.330779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.700 [2024-11-12 10:28:07.357695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.701 [2024-11-12 10:28:07.376611] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:18.701 [2024-11-12 10:28:07.376665] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:18.701 [2024-11-12 10:28:07.376721] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.701 [2024-11-12 10:28:07.438627] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.960 10:28:07 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.960 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:18.960 [2024-11-12 10:28:07.560487] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:18.960 [2024-11-12 10:28:07.560590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59960 ] 00:06:18.960 [2024-11-12 10:28:07.704916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.219 [2024-11-12 10:28:07.733189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.219 [2024-11-12 10:28:07.759882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.219 [2024-11-12 10:28:07.778794] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.219 [2024-11-12 10:28:07.778873] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.219 [2024-11-12 10:28:07.778907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.219 [2024-11-12 10:28:07.841416] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.219 ************************************ 00:06:19.219 END TEST dd_flag_directory 00:06:19.219 ************************************ 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.219 00:06:19.219 real 0m0.779s 00:06:19.219 user 0m0.396s 00:06:19.219 sys 0m0.176s 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:19.219 10:28:07 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:19.219 ************************************ 00:06:19.219 START TEST dd_flag_nofollow 00:06:19.219 ************************************ 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.219 10:28:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.479 [2024-11-12 10:28:08.012366] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:19.479 [2024-11-12 10:28:08.012454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:06:19.479 [2024-11-12 10:28:08.155276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.479 [2024-11-12 10:28:08.182625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.479 [2024-11-12 10:28:08.209349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.479 [2024-11-12 10:28:08.226451] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:19.479 [2024-11-12 10:28:08.226500] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:19.479 [2024-11-12 10:28:08.226533] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.738 [2024-11-12 10:28:08.286376] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.738 10:28:08 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.738 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:19.738 [2024-11-12 10:28:08.410964] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:19.738 [2024-11-12 10:28:08.411061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59998 ] 00:06:19.997 [2024-11-12 10:28:08.557539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.997 [2024-11-12 10:28:08.585377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.997 [2024-11-12 10:28:08.612531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.997 [2024-11-12 10:28:08.630067] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.997 [2024-11-12 10:28:08.630120] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.997 [2024-11-12 10:28:08.630154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.997 [2024-11-12 10:28:08.688723] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:19.997 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:20.256 10:28:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.256 [2024-11-12 10:28:08.798520] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
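The directory and nofollow tests in this stretch are both negative tests of the same shape; roughly (NOT is the harness helper, visible in the trace above, that inverts an exit status):

  # directory: a regular file must be rejected when the directory flag is requested
  NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0
  NOT spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory
  # nofollow: symlinked dump files must be rejected, but a plain copy through the link still works
  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
  spdk_dd --if=dd.dump0.link --of=dd.dump1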
00:06:20.256 [2024-11-12 10:28:08.798602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:06:20.256 [2024-11-12 10:28:08.937449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.256 [2024-11-12 10:28:08.969669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.256 [2024-11-12 10:28:08.997285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.515  [2024-11-12T10:28:09.273Z] Copying: 512/512 [B] (average 500 kBps) 00:06:20.515 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ cbpqfctzf6o7hy3e7z06a81t2w3vupxrgtpjbxixbns68l5ofb84kfkhbgrwy3e16m8116wad072253ux1lh68cxgurt4ob9safy9j833nnydtvufpms5dwx0mgs2vi333pmp1gzl5us7enzk1vui64h2sz95pkf39t9w38892cl3rvt5uzh8pdea02r2daj5myowwpu1sttao94m5zxjod5guhg5vces1zdq4c2uv9f2nqg6fw9xcim1e8wvveesv98a96ypzzybo7s698apai2x8ppvxwmxesm0d91o6mtcfvalckqsqyxgmb4o3y0o0ijjtayzjd3ct2z2yadi4pikqjzuvvyqdstpwriq30k16ldlmjzn6akfkqtdc2s6l2w0okcyo10d6jil2avm9w9jmsou4scae6g2psp54mbdc1d6avq1cc5njc955y180tv4rko300pohypnaeeq8p7imx14rq85a2dorcjtda0raymp2iuxxu13i9kkoac == \c\b\p\q\f\c\t\z\f\6\o\7\h\y\3\e\7\z\0\6\a\8\1\t\2\w\3\v\u\p\x\r\g\t\p\j\b\x\i\x\b\n\s\6\8\l\5\o\f\b\8\4\k\f\k\h\b\g\r\w\y\3\e\1\6\m\8\1\1\6\w\a\d\0\7\2\2\5\3\u\x\1\l\h\6\8\c\x\g\u\r\t\4\o\b\9\s\a\f\y\9\j\8\3\3\n\n\y\d\t\v\u\f\p\m\s\5\d\w\x\0\m\g\s\2\v\i\3\3\3\p\m\p\1\g\z\l\5\u\s\7\e\n\z\k\1\v\u\i\6\4\h\2\s\z\9\5\p\k\f\3\9\t\9\w\3\8\8\9\2\c\l\3\r\v\t\5\u\z\h\8\p\d\e\a\0\2\r\2\d\a\j\5\m\y\o\w\w\p\u\1\s\t\t\a\o\9\4\m\5\z\x\j\o\d\5\g\u\h\g\5\v\c\e\s\1\z\d\q\4\c\2\u\v\9\f\2\n\q\g\6\f\w\9\x\c\i\m\1\e\8\w\v\v\e\e\s\v\9\8\a\9\6\y\p\z\z\y\b\o\7\s\6\9\8\a\p\a\i\2\x\8\p\p\v\x\w\m\x\e\s\m\0\d\9\1\o\6\m\t\c\f\v\a\l\c\k\q\s\q\y\x\g\m\b\4\o\3\y\0\o\0\i\j\j\t\a\y\z\j\d\3\c\t\2\z\2\y\a\d\i\4\p\i\k\q\j\z\u\v\v\y\q\d\s\t\p\w\r\i\q\3\0\k\1\6\l\d\l\m\j\z\n\6\a\k\f\k\q\t\d\c\2\s\6\l\2\w\0\o\k\c\y\o\1\0\d\6\j\i\l\2\a\v\m\9\w\9\j\m\s\o\u\4\s\c\a\e\6\g\2\p\s\p\5\4\m\b\d\c\1\d\6\a\v\q\1\c\c\5\n\j\c\9\5\5\y\1\8\0\t\v\4\r\k\o\3\0\0\p\o\h\y\p\n\a\e\e\q\8\p\7\i\m\x\1\4\r\q\8\5\a\2\d\o\r\c\j\t\d\a\0\r\a\y\m\p\2\i\u\x\x\u\1\3\i\9\k\k\o\a\c ]] 00:06:20.516 00:06:20.516 real 0m1.193s 00:06:20.516 user 0m0.619s 00:06:20.516 sys 0m0.331s 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:20.516 ************************************ 00:06:20.516 END TEST dd_flag_nofollow 00:06:20.516 ************************************ 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.516 ************************************ 00:06:20.516 START TEST dd_flag_noatime 00:06:20.516 ************************************ 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731407289 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731407289 00:06:20.516 10:28:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.893 [2024-11-12 10:28:10.263863] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:21.893 [2024-11-12 10:28:10.263966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:06:21.893 [2024-11-12 10:28:10.414116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.893 [2024-11-12 10:28:10.452527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.893 [2024-11-12 10:28:10.485383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.893  [2024-11-12T10:28:10.651Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.893 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731407289 )) 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731407289 )) 00:06:21.893 10:28:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.152 [2024-11-12 10:28:10.704094] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:22.152 [2024-11-12 10:28:10.704214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:06:22.152 [2024-11-12 10:28:10.849857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.152 [2024-11-12 10:28:10.880327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.411 [2024-11-12 10:28:10.912372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.411  [2024-11-12T10:28:11.169Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.411 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731407290 )) 00:06:22.411 00:06:22.411 real 0m1.875s 00:06:22.411 user 0m0.460s 00:06:22.411 sys 0m0.373s 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.411 ************************************ 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:22.411 END TEST dd_flag_noatime 00:06:22.411 ************************************ 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.411 10:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:22.412 ************************************ 00:06:22.412 START TEST dd_flags_misc 00:06:22.412 ************************************ 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.412 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:22.412 [2024-11-12 10:28:11.165255] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
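The noatime test that ended just above compares file access times around two copies; as a sketch:

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime read left atime untouched
  spdk_dd --if=dd.dump0 --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) > atime_before ))    # the plain read advanced it in this run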
00:06:22.412 [2024-11-12 10:28:11.165386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:06:22.670 [2024-11-12 10:28:11.304714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.670 [2024-11-12 10:28:11.334543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.670 [2024-11-12 10:28:11.361560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.670  [2024-11-12T10:28:11.687Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.929 00:06:22.929 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bgs2reh7dbab37macte81s5m0r0npwv4zjt6wdyah2jptl9zcg3k05dkb3cjsymdd4urhk9ne813x79ydz7178t2bgk6cubn3qfp8y0r8603fo80fucnv8krzgisp60iqvkny1ltg15kd3ct8lz7rhglsmjez1h062y20siw95uh25bhqgwpxd8lncjbla1scox422pv2dstpwhulmunbf9ffdymrqnqkyvebw1b2j5kllvlkzj93vckpw8ae26mkmqougf33ln1651mzu9qgbaoysla6k23skd2omwpwbz6xwykzzfuj2tepznt3o8ulvouc42da1l7facxlqyvv5wbi82j86ewmznnw5nhfqp9popo0oq8jseior708ui68ta86ixbs44jrwquehohm2q8ytn9z9nsy2w42uisisgif5bw920pix5gzac93nfovycvvig72tnlc9pqj7e39bmguyty9b4rtcm33fwejfuktq66cpq6vq69acm0h319 == \b\g\s\2\r\e\h\7\d\b\a\b\3\7\m\a\c\t\e\8\1\s\5\m\0\r\0\n\p\w\v\4\z\j\t\6\w\d\y\a\h\2\j\p\t\l\9\z\c\g\3\k\0\5\d\k\b\3\c\j\s\y\m\d\d\4\u\r\h\k\9\n\e\8\1\3\x\7\9\y\d\z\7\1\7\8\t\2\b\g\k\6\c\u\b\n\3\q\f\p\8\y\0\r\8\6\0\3\f\o\8\0\f\u\c\n\v\8\k\r\z\g\i\s\p\6\0\i\q\v\k\n\y\1\l\t\g\1\5\k\d\3\c\t\8\l\z\7\r\h\g\l\s\m\j\e\z\1\h\0\6\2\y\2\0\s\i\w\9\5\u\h\2\5\b\h\q\g\w\p\x\d\8\l\n\c\j\b\l\a\1\s\c\o\x\4\2\2\p\v\2\d\s\t\p\w\h\u\l\m\u\n\b\f\9\f\f\d\y\m\r\q\n\q\k\y\v\e\b\w\1\b\2\j\5\k\l\l\v\l\k\z\j\9\3\v\c\k\p\w\8\a\e\2\6\m\k\m\q\o\u\g\f\3\3\l\n\1\6\5\1\m\z\u\9\q\g\b\a\o\y\s\l\a\6\k\2\3\s\k\d\2\o\m\w\p\w\b\z\6\x\w\y\k\z\z\f\u\j\2\t\e\p\z\n\t\3\o\8\u\l\v\o\u\c\4\2\d\a\1\l\7\f\a\c\x\l\q\y\v\v\5\w\b\i\8\2\j\8\6\e\w\m\z\n\n\w\5\n\h\f\q\p\9\p\o\p\o\0\o\q\8\j\s\e\i\o\r\7\0\8\u\i\6\8\t\a\8\6\i\x\b\s\4\4\j\r\w\q\u\e\h\o\h\m\2\q\8\y\t\n\9\z\9\n\s\y\2\w\4\2\u\i\s\i\s\g\i\f\5\b\w\9\2\0\p\i\x\5\g\z\a\c\9\3\n\f\o\v\y\c\v\v\i\g\7\2\t\n\l\c\9\p\q\j\7\e\3\9\b\m\g\u\y\t\y\9\b\4\r\t\c\m\3\3\f\w\e\j\f\u\k\t\q\6\6\c\p\q\6\v\q\6\9\a\c\m\0\h\3\1\9 ]] 00:06:22.929 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.929 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.929 [2024-11-12 10:28:11.550986] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:22.930 [2024-11-12 10:28:11.551080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60094 ] 00:06:23.188 [2024-11-12 10:28:11.697727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.188 [2024-11-12 10:28:11.727765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.189 [2024-11-12 10:28:11.754955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.189  [2024-11-12T10:28:11.947Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.189 00:06:23.189 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bgs2reh7dbab37macte81s5m0r0npwv4zjt6wdyah2jptl9zcg3k05dkb3cjsymdd4urhk9ne813x79ydz7178t2bgk6cubn3qfp8y0r8603fo80fucnv8krzgisp60iqvkny1ltg15kd3ct8lz7rhglsmjez1h062y20siw95uh25bhqgwpxd8lncjbla1scox422pv2dstpwhulmunbf9ffdymrqnqkyvebw1b2j5kllvlkzj93vckpw8ae26mkmqougf33ln1651mzu9qgbaoysla6k23skd2omwpwbz6xwykzzfuj2tepznt3o8ulvouc42da1l7facxlqyvv5wbi82j86ewmznnw5nhfqp9popo0oq8jseior708ui68ta86ixbs44jrwquehohm2q8ytn9z9nsy2w42uisisgif5bw920pix5gzac93nfovycvvig72tnlc9pqj7e39bmguyty9b4rtcm33fwejfuktq66cpq6vq69acm0h319 == \b\g\s\2\r\e\h\7\d\b\a\b\3\7\m\a\c\t\e\8\1\s\5\m\0\r\0\n\p\w\v\4\z\j\t\6\w\d\y\a\h\2\j\p\t\l\9\z\c\g\3\k\0\5\d\k\b\3\c\j\s\y\m\d\d\4\u\r\h\k\9\n\e\8\1\3\x\7\9\y\d\z\7\1\7\8\t\2\b\g\k\6\c\u\b\n\3\q\f\p\8\y\0\r\8\6\0\3\f\o\8\0\f\u\c\n\v\8\k\r\z\g\i\s\p\6\0\i\q\v\k\n\y\1\l\t\g\1\5\k\d\3\c\t\8\l\z\7\r\h\g\l\s\m\j\e\z\1\h\0\6\2\y\2\0\s\i\w\9\5\u\h\2\5\b\h\q\g\w\p\x\d\8\l\n\c\j\b\l\a\1\s\c\o\x\4\2\2\p\v\2\d\s\t\p\w\h\u\l\m\u\n\b\f\9\f\f\d\y\m\r\q\n\q\k\y\v\e\b\w\1\b\2\j\5\k\l\l\v\l\k\z\j\9\3\v\c\k\p\w\8\a\e\2\6\m\k\m\q\o\u\g\f\3\3\l\n\1\6\5\1\m\z\u\9\q\g\b\a\o\y\s\l\a\6\k\2\3\s\k\d\2\o\m\w\p\w\b\z\6\x\w\y\k\z\z\f\u\j\2\t\e\p\z\n\t\3\o\8\u\l\v\o\u\c\4\2\d\a\1\l\7\f\a\c\x\l\q\y\v\v\5\w\b\i\8\2\j\8\6\e\w\m\z\n\n\w\5\n\h\f\q\p\9\p\o\p\o\0\o\q\8\j\s\e\i\o\r\7\0\8\u\i\6\8\t\a\8\6\i\x\b\s\4\4\j\r\w\q\u\e\h\o\h\m\2\q\8\y\t\n\9\z\9\n\s\y\2\w\4\2\u\i\s\i\s\g\i\f\5\b\w\9\2\0\p\i\x\5\g\z\a\c\9\3\n\f\o\v\y\c\v\v\i\g\7\2\t\n\l\c\9\p\q\j\7\e\3\9\b\m\g\u\y\t\y\9\b\4\r\t\c\m\3\3\f\w\e\j\f\u\k\t\q\6\6\c\p\q\6\v\q\6\9\a\c\m\0\h\3\1\9 ]] 00:06:23.189 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.189 10:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:23.189 [2024-11-12 10:28:11.934062] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:23.189 [2024-11-12 10:28:11.934161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60098 ] 00:06:23.448 [2024-11-12 10:28:12.082817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.448 [2024-11-12 10:28:12.111254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.448 [2024-11-12 10:28:12.138664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.448  [2024-11-12T10:28:12.503Z] Copying: 512/512 [B] (average 166 kBps) 00:06:23.745 00:06:23.745 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bgs2reh7dbab37macte81s5m0r0npwv4zjt6wdyah2jptl9zcg3k05dkb3cjsymdd4urhk9ne813x79ydz7178t2bgk6cubn3qfp8y0r8603fo80fucnv8krzgisp60iqvkny1ltg15kd3ct8lz7rhglsmjez1h062y20siw95uh25bhqgwpxd8lncjbla1scox422pv2dstpwhulmunbf9ffdymrqnqkyvebw1b2j5kllvlkzj93vckpw8ae26mkmqougf33ln1651mzu9qgbaoysla6k23skd2omwpwbz6xwykzzfuj2tepznt3o8ulvouc42da1l7facxlqyvv5wbi82j86ewmznnw5nhfqp9popo0oq8jseior708ui68ta86ixbs44jrwquehohm2q8ytn9z9nsy2w42uisisgif5bw920pix5gzac93nfovycvvig72tnlc9pqj7e39bmguyty9b4rtcm33fwejfuktq66cpq6vq69acm0h319 == \b\g\s\2\r\e\h\7\d\b\a\b\3\7\m\a\c\t\e\8\1\s\5\m\0\r\0\n\p\w\v\4\z\j\t\6\w\d\y\a\h\2\j\p\t\l\9\z\c\g\3\k\0\5\d\k\b\3\c\j\s\y\m\d\d\4\u\r\h\k\9\n\e\8\1\3\x\7\9\y\d\z\7\1\7\8\t\2\b\g\k\6\c\u\b\n\3\q\f\p\8\y\0\r\8\6\0\3\f\o\8\0\f\u\c\n\v\8\k\r\z\g\i\s\p\6\0\i\q\v\k\n\y\1\l\t\g\1\5\k\d\3\c\t\8\l\z\7\r\h\g\l\s\m\j\e\z\1\h\0\6\2\y\2\0\s\i\w\9\5\u\h\2\5\b\h\q\g\w\p\x\d\8\l\n\c\j\b\l\a\1\s\c\o\x\4\2\2\p\v\2\d\s\t\p\w\h\u\l\m\u\n\b\f\9\f\f\d\y\m\r\q\n\q\k\y\v\e\b\w\1\b\2\j\5\k\l\l\v\l\k\z\j\9\3\v\c\k\p\w\8\a\e\2\6\m\k\m\q\o\u\g\f\3\3\l\n\1\6\5\1\m\z\u\9\q\g\b\a\o\y\s\l\a\6\k\2\3\s\k\d\2\o\m\w\p\w\b\z\6\x\w\y\k\z\z\f\u\j\2\t\e\p\z\n\t\3\o\8\u\l\v\o\u\c\4\2\d\a\1\l\7\f\a\c\x\l\q\y\v\v\5\w\b\i\8\2\j\8\6\e\w\m\z\n\n\w\5\n\h\f\q\p\9\p\o\p\o\0\o\q\8\j\s\e\i\o\r\7\0\8\u\i\6\8\t\a\8\6\i\x\b\s\4\4\j\r\w\q\u\e\h\o\h\m\2\q\8\y\t\n\9\z\9\n\s\y\2\w\4\2\u\i\s\i\s\g\i\f\5\b\w\9\2\0\p\i\x\5\g\z\a\c\9\3\n\f\o\v\y\c\v\v\i\g\7\2\t\n\l\c\9\p\q\j\7\e\3\9\b\m\g\u\y\t\y\9\b\4\r\t\c\m\3\3\f\w\e\j\f\u\k\t\q\6\6\c\p\q\6\v\q\6\9\a\c\m\0\h\3\1\9 ]] 00:06:23.745 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.745 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.745 [2024-11-12 10:28:12.339679] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:23.745 [2024-11-12 10:28:12.340302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60113 ] 00:06:23.745 [2024-11-12 10:28:12.485454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.034 [2024-11-12 10:28:12.520540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.034 [2024-11-12 10:28:12.547831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.034  [2024-11-12T10:28:12.792Z] Copying: 512/512 [B] (average 250 kBps) 00:06:24.034 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ bgs2reh7dbab37macte81s5m0r0npwv4zjt6wdyah2jptl9zcg3k05dkb3cjsymdd4urhk9ne813x79ydz7178t2bgk6cubn3qfp8y0r8603fo80fucnv8krzgisp60iqvkny1ltg15kd3ct8lz7rhglsmjez1h062y20siw95uh25bhqgwpxd8lncjbla1scox422pv2dstpwhulmunbf9ffdymrqnqkyvebw1b2j5kllvlkzj93vckpw8ae26mkmqougf33ln1651mzu9qgbaoysla6k23skd2omwpwbz6xwykzzfuj2tepznt3o8ulvouc42da1l7facxlqyvv5wbi82j86ewmznnw5nhfqp9popo0oq8jseior708ui68ta86ixbs44jrwquehohm2q8ytn9z9nsy2w42uisisgif5bw920pix5gzac93nfovycvvig72tnlc9pqj7e39bmguyty9b4rtcm33fwejfuktq66cpq6vq69acm0h319 == \b\g\s\2\r\e\h\7\d\b\a\b\3\7\m\a\c\t\e\8\1\s\5\m\0\r\0\n\p\w\v\4\z\j\t\6\w\d\y\a\h\2\j\p\t\l\9\z\c\g\3\k\0\5\d\k\b\3\c\j\s\y\m\d\d\4\u\r\h\k\9\n\e\8\1\3\x\7\9\y\d\z\7\1\7\8\t\2\b\g\k\6\c\u\b\n\3\q\f\p\8\y\0\r\8\6\0\3\f\o\8\0\f\u\c\n\v\8\k\r\z\g\i\s\p\6\0\i\q\v\k\n\y\1\l\t\g\1\5\k\d\3\c\t\8\l\z\7\r\h\g\l\s\m\j\e\z\1\h\0\6\2\y\2\0\s\i\w\9\5\u\h\2\5\b\h\q\g\w\p\x\d\8\l\n\c\j\b\l\a\1\s\c\o\x\4\2\2\p\v\2\d\s\t\p\w\h\u\l\m\u\n\b\f\9\f\f\d\y\m\r\q\n\q\k\y\v\e\b\w\1\b\2\j\5\k\l\l\v\l\k\z\j\9\3\v\c\k\p\w\8\a\e\2\6\m\k\m\q\o\u\g\f\3\3\l\n\1\6\5\1\m\z\u\9\q\g\b\a\o\y\s\l\a\6\k\2\3\s\k\d\2\o\m\w\p\w\b\z\6\x\w\y\k\z\z\f\u\j\2\t\e\p\z\n\t\3\o\8\u\l\v\o\u\c\4\2\d\a\1\l\7\f\a\c\x\l\q\y\v\v\5\w\b\i\8\2\j\8\6\e\w\m\z\n\n\w\5\n\h\f\q\p\9\p\o\p\o\0\o\q\8\j\s\e\i\o\r\7\0\8\u\i\6\8\t\a\8\6\i\x\b\s\4\4\j\r\w\q\u\e\h\o\h\m\2\q\8\y\t\n\9\z\9\n\s\y\2\w\4\2\u\i\s\i\s\g\i\f\5\b\w\9\2\0\p\i\x\5\g\z\a\c\9\3\n\f\o\v\y\c\v\v\i\g\7\2\t\n\l\c\9\p\q\j\7\e\3\9\b\m\g\u\y\t\y\9\b\4\r\t\c\m\3\3\f\w\e\j\f\u\k\t\q\6\6\c\p\q\6\v\q\6\9\a\c\m\0\h\3\1\9 ]] 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.034 10:28:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.034 [2024-11-12 10:28:12.761249] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:24.034 [2024-11-12 10:28:12.761345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60117 ] 00:06:24.305 [2024-11-12 10:28:12.905307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.305 [2024-11-12 10:28:12.935997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.305 [2024-11-12 10:28:12.965660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.305  [2024-11-12T10:28:13.322Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.564 00:06:24.564 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3zfdgu5mg5g8dcu0urgfun253o6gp7bj9f6cxy1qslefrdpffazlenw538p0zufw4zqau03v6tjisizhph2stcqrhjikjfclolnn9p817hu8ynnvaeg17veirc6kbvowavuh56m5djmliaf4olkr18p6f07azt9ee3oad4qxffyejizk19t1mapnnxs8strxxrxg8ky1qikvrjpdqpdxrkndbjguictvrkexx4gps5mp6p2ncookbjrk45a8cdkexmot224jxflg04c1b70ry1729m8g3cnm75djifqregokq45tnea66nox38wrenjlsxgl09zhxmg3hhspbohiotum1vzv1lyghmd1dhx4s3mwzrfrrsgjdfhqa44nou48ah3h71n1jqayppjxzmihwylzi6litvvzk5r3wcxmz8cqwpbd5qp3bsv3jzl10rksws4kn30r9mms2lwg1dvxkmd21bxsbur99hffo1hyx4szvffk3bji8i76remgpm4x == \3\z\f\d\g\u\5\m\g\5\g\8\d\c\u\0\u\r\g\f\u\n\2\5\3\o\6\g\p\7\b\j\9\f\6\c\x\y\1\q\s\l\e\f\r\d\p\f\f\a\z\l\e\n\w\5\3\8\p\0\z\u\f\w\4\z\q\a\u\0\3\v\6\t\j\i\s\i\z\h\p\h\2\s\t\c\q\r\h\j\i\k\j\f\c\l\o\l\n\n\9\p\8\1\7\h\u\8\y\n\n\v\a\e\g\1\7\v\e\i\r\c\6\k\b\v\o\w\a\v\u\h\5\6\m\5\d\j\m\l\i\a\f\4\o\l\k\r\1\8\p\6\f\0\7\a\z\t\9\e\e\3\o\a\d\4\q\x\f\f\y\e\j\i\z\k\1\9\t\1\m\a\p\n\n\x\s\8\s\t\r\x\x\r\x\g\8\k\y\1\q\i\k\v\r\j\p\d\q\p\d\x\r\k\n\d\b\j\g\u\i\c\t\v\r\k\e\x\x\4\g\p\s\5\m\p\6\p\2\n\c\o\o\k\b\j\r\k\4\5\a\8\c\d\k\e\x\m\o\t\2\2\4\j\x\f\l\g\0\4\c\1\b\7\0\r\y\1\7\2\9\m\8\g\3\c\n\m\7\5\d\j\i\f\q\r\e\g\o\k\q\4\5\t\n\e\a\6\6\n\o\x\3\8\w\r\e\n\j\l\s\x\g\l\0\9\z\h\x\m\g\3\h\h\s\p\b\o\h\i\o\t\u\m\1\v\z\v\1\l\y\g\h\m\d\1\d\h\x\4\s\3\m\w\z\r\f\r\r\s\g\j\d\f\h\q\a\4\4\n\o\u\4\8\a\h\3\h\7\1\n\1\j\q\a\y\p\p\j\x\z\m\i\h\w\y\l\z\i\6\l\i\t\v\v\z\k\5\r\3\w\c\x\m\z\8\c\q\w\p\b\d\5\q\p\3\b\s\v\3\j\z\l\1\0\r\k\s\w\s\4\k\n\3\0\r\9\m\m\s\2\l\w\g\1\d\v\x\k\m\d\2\1\b\x\s\b\u\r\9\9\h\f\f\o\1\h\y\x\4\s\z\v\f\f\k\3\b\j\i\8\i\7\6\r\e\m\g\p\m\4\x ]] 00:06:24.564 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.564 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:24.564 [2024-11-12 10:28:13.148313] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:24.564 [2024-11-12 10:28:13.148408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:06:24.564 [2024-11-12 10:28:13.293081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.564 [2024-11-12 10:28:13.320503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.823 [2024-11-12 10:28:13.347840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.823  [2024-11-12T10:28:13.581Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.823 00:06:24.823 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3zfdgu5mg5g8dcu0urgfun253o6gp7bj9f6cxy1qslefrdpffazlenw538p0zufw4zqau03v6tjisizhph2stcqrhjikjfclolnn9p817hu8ynnvaeg17veirc6kbvowavuh56m5djmliaf4olkr18p6f07azt9ee3oad4qxffyejizk19t1mapnnxs8strxxrxg8ky1qikvrjpdqpdxrkndbjguictvrkexx4gps5mp6p2ncookbjrk45a8cdkexmot224jxflg04c1b70ry1729m8g3cnm75djifqregokq45tnea66nox38wrenjlsxgl09zhxmg3hhspbohiotum1vzv1lyghmd1dhx4s3mwzrfrrsgjdfhqa44nou48ah3h71n1jqayppjxzmihwylzi6litvvzk5r3wcxmz8cqwpbd5qp3bsv3jzl10rksws4kn30r9mms2lwg1dvxkmd21bxsbur99hffo1hyx4szvffk3bji8i76remgpm4x == \3\z\f\d\g\u\5\m\g\5\g\8\d\c\u\0\u\r\g\f\u\n\2\5\3\o\6\g\p\7\b\j\9\f\6\c\x\y\1\q\s\l\e\f\r\d\p\f\f\a\z\l\e\n\w\5\3\8\p\0\z\u\f\w\4\z\q\a\u\0\3\v\6\t\j\i\s\i\z\h\p\h\2\s\t\c\q\r\h\j\i\k\j\f\c\l\o\l\n\n\9\p\8\1\7\h\u\8\y\n\n\v\a\e\g\1\7\v\e\i\r\c\6\k\b\v\o\w\a\v\u\h\5\6\m\5\d\j\m\l\i\a\f\4\o\l\k\r\1\8\p\6\f\0\7\a\z\t\9\e\e\3\o\a\d\4\q\x\f\f\y\e\j\i\z\k\1\9\t\1\m\a\p\n\n\x\s\8\s\t\r\x\x\r\x\g\8\k\y\1\q\i\k\v\r\j\p\d\q\p\d\x\r\k\n\d\b\j\g\u\i\c\t\v\r\k\e\x\x\4\g\p\s\5\m\p\6\p\2\n\c\o\o\k\b\j\r\k\4\5\a\8\c\d\k\e\x\m\o\t\2\2\4\j\x\f\l\g\0\4\c\1\b\7\0\r\y\1\7\2\9\m\8\g\3\c\n\m\7\5\d\j\i\f\q\r\e\g\o\k\q\4\5\t\n\e\a\6\6\n\o\x\3\8\w\r\e\n\j\l\s\x\g\l\0\9\z\h\x\m\g\3\h\h\s\p\b\o\h\i\o\t\u\m\1\v\z\v\1\l\y\g\h\m\d\1\d\h\x\4\s\3\m\w\z\r\f\r\r\s\g\j\d\f\h\q\a\4\4\n\o\u\4\8\a\h\3\h\7\1\n\1\j\q\a\y\p\p\j\x\z\m\i\h\w\y\l\z\i\6\l\i\t\v\v\z\k\5\r\3\w\c\x\m\z\8\c\q\w\p\b\d\5\q\p\3\b\s\v\3\j\z\l\1\0\r\k\s\w\s\4\k\n\3\0\r\9\m\m\s\2\l\w\g\1\d\v\x\k\m\d\2\1\b\x\s\b\u\r\9\9\h\f\f\o\1\h\y\x\4\s\z\v\f\f\k\3\b\j\i\8\i\7\6\r\e\m\g\p\m\4\x ]] 00:06:24.823 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.823 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:24.823 [2024-11-12 10:28:13.528759] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:24.823 [2024-11-12 10:28:13.528860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:06:25.083 [2024-11-12 10:28:13.673944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.083 [2024-11-12 10:28:13.706169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.083 [2024-11-12 10:28:13.734163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.083  [2024-11-12T10:28:14.100Z] Copying: 512/512 [B] (average 250 kBps) 00:06:25.342 00:06:25.342 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3zfdgu5mg5g8dcu0urgfun253o6gp7bj9f6cxy1qslefrdpffazlenw538p0zufw4zqau03v6tjisizhph2stcqrhjikjfclolnn9p817hu8ynnvaeg17veirc6kbvowavuh56m5djmliaf4olkr18p6f07azt9ee3oad4qxffyejizk19t1mapnnxs8strxxrxg8ky1qikvrjpdqpdxrkndbjguictvrkexx4gps5mp6p2ncookbjrk45a8cdkexmot224jxflg04c1b70ry1729m8g3cnm75djifqregokq45tnea66nox38wrenjlsxgl09zhxmg3hhspbohiotum1vzv1lyghmd1dhx4s3mwzrfrrsgjdfhqa44nou48ah3h71n1jqayppjxzmihwylzi6litvvzk5r3wcxmz8cqwpbd5qp3bsv3jzl10rksws4kn30r9mms2lwg1dvxkmd21bxsbur99hffo1hyx4szvffk3bji8i76remgpm4x == \3\z\f\d\g\u\5\m\g\5\g\8\d\c\u\0\u\r\g\f\u\n\2\5\3\o\6\g\p\7\b\j\9\f\6\c\x\y\1\q\s\l\e\f\r\d\p\f\f\a\z\l\e\n\w\5\3\8\p\0\z\u\f\w\4\z\q\a\u\0\3\v\6\t\j\i\s\i\z\h\p\h\2\s\t\c\q\r\h\j\i\k\j\f\c\l\o\l\n\n\9\p\8\1\7\h\u\8\y\n\n\v\a\e\g\1\7\v\e\i\r\c\6\k\b\v\o\w\a\v\u\h\5\6\m\5\d\j\m\l\i\a\f\4\o\l\k\r\1\8\p\6\f\0\7\a\z\t\9\e\e\3\o\a\d\4\q\x\f\f\y\e\j\i\z\k\1\9\t\1\m\a\p\n\n\x\s\8\s\t\r\x\x\r\x\g\8\k\y\1\q\i\k\v\r\j\p\d\q\p\d\x\r\k\n\d\b\j\g\u\i\c\t\v\r\k\e\x\x\4\g\p\s\5\m\p\6\p\2\n\c\o\o\k\b\j\r\k\4\5\a\8\c\d\k\e\x\m\o\t\2\2\4\j\x\f\l\g\0\4\c\1\b\7\0\r\y\1\7\2\9\m\8\g\3\c\n\m\7\5\d\j\i\f\q\r\e\g\o\k\q\4\5\t\n\e\a\6\6\n\o\x\3\8\w\r\e\n\j\l\s\x\g\l\0\9\z\h\x\m\g\3\h\h\s\p\b\o\h\i\o\t\u\m\1\v\z\v\1\l\y\g\h\m\d\1\d\h\x\4\s\3\m\w\z\r\f\r\r\s\g\j\d\f\h\q\a\4\4\n\o\u\4\8\a\h\3\h\7\1\n\1\j\q\a\y\p\p\j\x\z\m\i\h\w\y\l\z\i\6\l\i\t\v\v\z\k\5\r\3\w\c\x\m\z\8\c\q\w\p\b\d\5\q\p\3\b\s\v\3\j\z\l\1\0\r\k\s\w\s\4\k\n\3\0\r\9\m\m\s\2\l\w\g\1\d\v\x\k\m\d\2\1\b\x\s\b\u\r\9\9\h\f\f\o\1\h\y\x\4\s\z\v\f\f\k\3\b\j\i\8\i\7\6\r\e\m\g\p\m\4\x ]] 00:06:25.342 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.342 10:28:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:25.342 [2024-11-12 10:28:13.902590] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:25.342 [2024-11-12 10:28:13.902689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60140 ] 00:06:25.342 [2024-11-12 10:28:14.039055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.342 [2024-11-12 10:28:14.066438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.342 [2024-11-12 10:28:14.092861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.601  [2024-11-12T10:28:14.359Z] Copying: 512/512 [B] (average 166 kBps) 00:06:25.601 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3zfdgu5mg5g8dcu0urgfun253o6gp7bj9f6cxy1qslefrdpffazlenw538p0zufw4zqau03v6tjisizhph2stcqrhjikjfclolnn9p817hu8ynnvaeg17veirc6kbvowavuh56m5djmliaf4olkr18p6f07azt9ee3oad4qxffyejizk19t1mapnnxs8strxxrxg8ky1qikvrjpdqpdxrkndbjguictvrkexx4gps5mp6p2ncookbjrk45a8cdkexmot224jxflg04c1b70ry1729m8g3cnm75djifqregokq45tnea66nox38wrenjlsxgl09zhxmg3hhspbohiotum1vzv1lyghmd1dhx4s3mwzrfrrsgjdfhqa44nou48ah3h71n1jqayppjxzmihwylzi6litvvzk5r3wcxmz8cqwpbd5qp3bsv3jzl10rksws4kn30r9mms2lwg1dvxkmd21bxsbur99hffo1hyx4szvffk3bji8i76remgpm4x == \3\z\f\d\g\u\5\m\g\5\g\8\d\c\u\0\u\r\g\f\u\n\2\5\3\o\6\g\p\7\b\j\9\f\6\c\x\y\1\q\s\l\e\f\r\d\p\f\f\a\z\l\e\n\w\5\3\8\p\0\z\u\f\w\4\z\q\a\u\0\3\v\6\t\j\i\s\i\z\h\p\h\2\s\t\c\q\r\h\j\i\k\j\f\c\l\o\l\n\n\9\p\8\1\7\h\u\8\y\n\n\v\a\e\g\1\7\v\e\i\r\c\6\k\b\v\o\w\a\v\u\h\5\6\m\5\d\j\m\l\i\a\f\4\o\l\k\r\1\8\p\6\f\0\7\a\z\t\9\e\e\3\o\a\d\4\q\x\f\f\y\e\j\i\z\k\1\9\t\1\m\a\p\n\n\x\s\8\s\t\r\x\x\r\x\g\8\k\y\1\q\i\k\v\r\j\p\d\q\p\d\x\r\k\n\d\b\j\g\u\i\c\t\v\r\k\e\x\x\4\g\p\s\5\m\p\6\p\2\n\c\o\o\k\b\j\r\k\4\5\a\8\c\d\k\e\x\m\o\t\2\2\4\j\x\f\l\g\0\4\c\1\b\7\0\r\y\1\7\2\9\m\8\g\3\c\n\m\7\5\d\j\i\f\q\r\e\g\o\k\q\4\5\t\n\e\a\6\6\n\o\x\3\8\w\r\e\n\j\l\s\x\g\l\0\9\z\h\x\m\g\3\h\h\s\p\b\o\h\i\o\t\u\m\1\v\z\v\1\l\y\g\h\m\d\1\d\h\x\4\s\3\m\w\z\r\f\r\r\s\g\j\d\f\h\q\a\4\4\n\o\u\4\8\a\h\3\h\7\1\n\1\j\q\a\y\p\p\j\x\z\m\i\h\w\y\l\z\i\6\l\i\t\v\v\z\k\5\r\3\w\c\x\m\z\8\c\q\w\p\b\d\5\q\p\3\b\s\v\3\j\z\l\1\0\r\k\s\w\s\4\k\n\3\0\r\9\m\m\s\2\l\w\g\1\d\v\x\k\m\d\2\1\b\x\s\b\u\r\9\9\h\f\f\o\1\h\y\x\4\s\z\v\f\f\k\3\b\j\i\8\i\7\6\r\e\m\g\p\m\4\x ]] 00:06:25.601 00:06:25.601 real 0m3.118s 00:06:25.601 user 0m1.595s 00:06:25.601 sys 0m1.260s 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.601 ************************************ 00:06:25.601 END TEST dd_flags_misc 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:25.601 ************************************ 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:25.601 * Second test run, disabling liburing, forcing AIO 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.601 ************************************ 00:06:25.601 START TEST dd_flag_append_forced_aio 00:06:25.601 ************************************ 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=rhqu6vx35drx54mauh6ev1gdbl1giu48 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=i334ofm0zc1dkd7juqkm1vvgws7hu9bz 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s rhqu6vx35drx54mauh6ev1gdbl1giu48 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s i334ofm0zc1dkd7juqkm1vvgws7hu9bz 00:06:25.601 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:25.601 [2024-11-12 10:28:14.344195] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:25.601 [2024-11-12 10:28:14.344292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60174 ] 00:06:25.860 [2024-11-12 10:28:14.488930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.860 [2024-11-12 10:28:14.516317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.860 [2024-11-12 10:28:14.543323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.860  [2024-11-12T10:28:14.878Z] Copying: 32/32 [B] (average 31 kBps) 00:06:26.120 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ i334ofm0zc1dkd7juqkm1vvgws7hu9bzrhqu6vx35drx54mauh6ev1gdbl1giu48 == \i\3\3\4\o\f\m\0\z\c\1\d\k\d\7\j\u\q\k\m\1\v\v\g\w\s\7\h\u\9\b\z\r\h\q\u\6\v\x\3\5\d\r\x\5\4\m\a\u\h\6\e\v\1\g\d\b\l\1\g\i\u\4\8 ]] 00:06:26.120 00:06:26.120 real 0m0.420s 00:06:26.120 user 0m0.214s 00:06:26.120 sys 0m0.085s 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.120 ************************************ 00:06:26.120 END TEST dd_flag_append_forced_aio 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.120 ************************************ 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.120 ************************************ 00:06:26.120 START TEST dd_flag_directory_forced_aio 00:06:26.120 ************************************ 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.120 10:28:14 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.120 10:28:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.120 [2024-11-12 10:28:14.803450] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:26.120 [2024-11-12 10:28:14.803527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:06:26.380 [2024-11-12 10:28:14.940622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.380 [2024-11-12 10:28:14.969505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.380 [2024-11-12 10:28:15.000009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.380 [2024-11-12 10:28:15.017512] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.380 [2024-11-12 10:28:15.017577] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.380 [2024-11-12 10:28:15.017608] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.380 [2024-11-12 10:28:15.075867] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.380 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.639 [2024-11-12 10:28:15.184794] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:26.639 [2024-11-12 10:28:15.184884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:06:26.639 [2024-11-12 10:28:15.328945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.639 [2024-11-12 10:28:15.356524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.639 [2024-11-12 10:28:15.385117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.899 [2024-11-12 10:28:15.402913] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.899 [2024-11-12 10:28:15.402981] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.899 [2024-11-12 10:28:15.403013] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.899 [2024-11-12 10:28:15.460903] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:26.899 10:28:15 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.899 00:06:26.899 real 0m0.758s 00:06:26.899 user 0m0.373s 00:06:26.899 sys 0m0.177s 00:06:26.899 ************************************ 00:06:26.899 END TEST dd_flag_directory_forced_aio 00:06:26.899 ************************************ 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.899 ************************************ 00:06:26.899 START TEST dd_flag_nofollow_forced_aio 00:06:26.899 ************************************ 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.899 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.899 [2024-11-12 10:28:15.627371] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:26.899 [2024-11-12 10:28:15.627466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60233 ] 00:06:27.158 [2024-11-12 10:28:15.772656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.158 [2024-11-12 10:28:15.805362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.158 [2024-11-12 10:28:15.833853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.158 [2024-11-12 10:28:15.853472] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.158 [2024-11-12 10:28:15.853524] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.158 [2024-11-12 10:28:15.853556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.158 [2024-11-12 10:28:15.914631] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.417 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:27.417 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.418 10:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.418 [2024-11-12 10:28:16.022944] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:27.418 [2024-11-12 10:28:16.023040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:06:27.418 [2024-11-12 10:28:16.167002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.677 [2024-11-12 10:28:16.195306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.677 [2024-11-12 10:28:16.223936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.677 [2024-11-12 10:28:16.242028] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:27.677 [2024-11-12 10:28:16.242103] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:27.677 [2024-11-12 10:28:16.242151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.677 [2024-11-12 10:28:16.308519] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.677 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.677 [2024-11-12 10:28:16.407133] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:27.677 [2024-11-12 10:28:16.407244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60250 ] 00:06:27.936 [2024-11-12 10:28:16.544794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.936 [2024-11-12 10:28:16.573160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.936 [2024-11-12 10:28:16.600336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.936  [2024-11-12T10:28:16.953Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.195 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 91k0jvd7k6pyviorbyll0ivm4t3auw70o0j4mx151d20t8unk5tbhb5j60p0j46e1mp6u2os8f18sdio9m13pl8eo435w0qkeidhlsrjnimxkntkjt15thttrnsuk5q19qynih9zly3brg5mpvv3jvip316pgt2l5222faxydizgj2048wexi2teouwm5q5xfax0bx4z60qx29r6aobqke1vm06m20vem0655br700uzuk46dlur4wi782gtmgamsy2n596i3og84g0d9rvy635d7encjzssf3brur24fs6y0rwagycvurfktghse2c3h78q3562kumbkp1wod2o6whaaaugqeknop58o65oyu978w74ssctizidky7hk4ar7wzbwofpznkb9e3vgkhpclzuuokl6csal6smwua48io4ujzcd0hvdezspibfnr4pc68hb3q755wudmwwzmq15jduoe2hfr79ikk3rap3q4qh6p5bz2ozb0s4v3x4bh0k == \9\1\k\0\j\v\d\7\k\6\p\y\v\i\o\r\b\y\l\l\0\i\v\m\4\t\3\a\u\w\7\0\o\0\j\4\m\x\1\5\1\d\2\0\t\8\u\n\k\5\t\b\h\b\5\j\6\0\p\0\j\4\6\e\1\m\p\6\u\2\o\s\8\f\1\8\s\d\i\o\9\m\1\3\p\l\8\e\o\4\3\5\w\0\q\k\e\i\d\h\l\s\r\j\n\i\m\x\k\n\t\k\j\t\1\5\t\h\t\t\r\n\s\u\k\5\q\1\9\q\y\n\i\h\9\z\l\y\3\b\r\g\5\m\p\v\v\3\j\v\i\p\3\1\6\p\g\t\2\l\5\2\2\2\f\a\x\y\d\i\z\g\j\2\0\4\8\w\e\x\i\2\t\e\o\u\w\m\5\q\5\x\f\a\x\0\b\x\4\z\6\0\q\x\2\9\r\6\a\o\b\q\k\e\1\v\m\0\6\m\2\0\v\e\m\0\6\5\5\b\r\7\0\0\u\z\u\k\4\6\d\l\u\r\4\w\i\7\8\2\g\t\m\g\a\m\s\y\2\n\5\9\6\i\3\o\g\8\4\g\0\d\9\r\v\y\6\3\5\d\7\e\n\c\j\z\s\s\f\3\b\r\u\r\2\4\f\s\6\y\0\r\w\a\g\y\c\v\u\r\f\k\t\g\h\s\e\2\c\3\h\7\8\q\3\5\6\2\k\u\m\b\k\p\1\w\o\d\2\o\6\w\h\a\a\a\u\g\q\e\k\n\o\p\5\8\o\6\5\o\y\u\9\7\8\w\7\4\s\s\c\t\i\z\i\d\k\y\7\h\k\4\a\r\7\w\z\b\w\o\f\p\z\n\k\b\9\e\3\v\g\k\h\p\c\l\z\u\u\o\k\l\6\c\s\a\l\6\s\m\w\u\a\4\8\i\o\4\u\j\z\c\d\0\h\v\d\e\z\s\p\i\b\f\n\r\4\p\c\6\8\h\b\3\q\7\5\5\w\u\d\m\w\w\z\m\q\1\5\j\d\u\o\e\2\h\f\r\7\9\i\k\k\3\r\a\p\3\q\4\q\h\6\p\5\b\z\2\o\z\b\0\s\4\v\3\x\4\b\h\0\k ]] 00:06:28.195 00:06:28.195 real 0m1.187s 00:06:28.195 user 0m0.611s 00:06:28.195 sys 0m0.252s 00:06:28.195 ************************************ 00:06:28.195 END TEST dd_flag_nofollow_forced_aio 00:06:28.195 ************************************ 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 ************************************ 00:06:28.195 START TEST dd_flag_noatime_forced_aio 00:06:28.195 ************************************ 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731407296 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731407296 00:06:28.195 10:28:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:29.132 10:28:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.132 [2024-11-12 10:28:17.865467] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:29.132 [2024-11-12 10:28:17.865570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:06:29.391 [2024-11-12 10:28:18.004015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.391 [2024-11-12 10:28:18.032553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.391 [2024-11-12 10:28:18.063593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.391  [2024-11-12T10:28:18.408Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.650 00:06:29.650 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.650 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731407296 )) 00:06:29.650 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.650 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731407296 )) 00:06:29.650 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.650 [2024-11-12 10:28:18.286241] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:29.650 [2024-11-12 10:28:18.286353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60291 ] 00:06:29.909 [2024-11-12 10:28:18.429012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.909 [2024-11-12 10:28:18.455904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.909 [2024-11-12 10:28:18.482976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.909  [2024-11-12T10:28:18.667Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.909 00:06:29.909 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.909 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731407298 )) 00:06:29.909 00:06:29.909 real 0m1.827s 00:06:29.909 user 0m0.373s 00:06:29.909 sys 0m0.214s 00:06:29.909 ************************************ 00:06:29.909 END TEST dd_flag_noatime_forced_aio 00:06:29.909 ************************************ 00:06:29.909 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.909 10:28:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.169 ************************************ 00:06:30.169 START TEST dd_flags_misc_forced_aio 00:06:30.169 ************************************ 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.169 10:28:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.169 [2024-11-12 10:28:18.744516] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:30.169 [2024-11-12 10:28:18.744609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60323 ] 00:06:30.169 [2024-11-12 10:28:18.896263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.428 [2024-11-12 10:28:18.934951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.428 [2024-11-12 10:28:18.968025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.428  [2024-11-12T10:28:19.186Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.428 00:06:30.428 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v3k3rlyjbcg9sj9nx7na3poo2zm6py0zap7u7i2pssi7kunykhh9s6vugb9kk6p5r57mesmdf8f57f7z2bam1jymmt7jox9w25qpz32idax99nbmb2shrsobl8f8iaxrt86c2s1fa9o09ulfkepw3dd3plsu7xh4643den4fzmg315dgo6imwbnmcpiid7szh416uk90fars43otty2smy3n0adlwumb2sy4pw8dff5klne7r2unqzgm58nffzb9xhjzd25x0aly6ia19k2nb4f8bvejr2e959nn22bf67o83nn0lp4bxdtl97gcagk2pkq8oe1lu9w5ufo1fph0402fmgywo4thmymvlay3akhascbsc3uwevu4p7tl6n1pfrr0vcbubjnn36rlvq3ahrudupb7ltf7qc92c5lfwyj1qzfgsh073o1wpapmgvh5kxth71iuxrmnbc86ym71bprm1sn4ypzylin4fglu1jvf0qkt4xmf6n2w64v6mppt == 
\v\3\k\3\r\l\y\j\b\c\g\9\s\j\9\n\x\7\n\a\3\p\o\o\2\z\m\6\p\y\0\z\a\p\7\u\7\i\2\p\s\s\i\7\k\u\n\y\k\h\h\9\s\6\v\u\g\b\9\k\k\6\p\5\r\5\7\m\e\s\m\d\f\8\f\5\7\f\7\z\2\b\a\m\1\j\y\m\m\t\7\j\o\x\9\w\2\5\q\p\z\3\2\i\d\a\x\9\9\n\b\m\b\2\s\h\r\s\o\b\l\8\f\8\i\a\x\r\t\8\6\c\2\s\1\f\a\9\o\0\9\u\l\f\k\e\p\w\3\d\d\3\p\l\s\u\7\x\h\4\6\4\3\d\e\n\4\f\z\m\g\3\1\5\d\g\o\6\i\m\w\b\n\m\c\p\i\i\d\7\s\z\h\4\1\6\u\k\9\0\f\a\r\s\4\3\o\t\t\y\2\s\m\y\3\n\0\a\d\l\w\u\m\b\2\s\y\4\p\w\8\d\f\f\5\k\l\n\e\7\r\2\u\n\q\z\g\m\5\8\n\f\f\z\b\9\x\h\j\z\d\2\5\x\0\a\l\y\6\i\a\1\9\k\2\n\b\4\f\8\b\v\e\j\r\2\e\9\5\9\n\n\2\2\b\f\6\7\o\8\3\n\n\0\l\p\4\b\x\d\t\l\9\7\g\c\a\g\k\2\p\k\q\8\o\e\1\l\u\9\w\5\u\f\o\1\f\p\h\0\4\0\2\f\m\g\y\w\o\4\t\h\m\y\m\v\l\a\y\3\a\k\h\a\s\c\b\s\c\3\u\w\e\v\u\4\p\7\t\l\6\n\1\p\f\r\r\0\v\c\b\u\b\j\n\n\3\6\r\l\v\q\3\a\h\r\u\d\u\p\b\7\l\t\f\7\q\c\9\2\c\5\l\f\w\y\j\1\q\z\f\g\s\h\0\7\3\o\1\w\p\a\p\m\g\v\h\5\k\x\t\h\7\1\i\u\x\r\m\n\b\c\8\6\y\m\7\1\b\p\r\m\1\s\n\4\y\p\z\y\l\i\n\4\f\g\l\u\1\j\v\f\0\q\k\t\4\x\m\f\6\n\2\w\6\4\v\6\m\p\p\t ]] 00:06:30.428 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.428 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.428 [2024-11-12 10:28:19.161944] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:30.428 [2024-11-12 10:28:19.162028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60325 ] 00:06:30.687 [2024-11-12 10:28:19.291565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.687 [2024-11-12 10:28:19.318648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.687 [2024-11-12 10:28:19.346803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.687  [2024-11-12T10:28:19.704Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.946 00:06:30.947 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v3k3rlyjbcg9sj9nx7na3poo2zm6py0zap7u7i2pssi7kunykhh9s6vugb9kk6p5r57mesmdf8f57f7z2bam1jymmt7jox9w25qpz32idax99nbmb2shrsobl8f8iaxrt86c2s1fa9o09ulfkepw3dd3plsu7xh4643den4fzmg315dgo6imwbnmcpiid7szh416uk90fars43otty2smy3n0adlwumb2sy4pw8dff5klne7r2unqzgm58nffzb9xhjzd25x0aly6ia19k2nb4f8bvejr2e959nn22bf67o83nn0lp4bxdtl97gcagk2pkq8oe1lu9w5ufo1fph0402fmgywo4thmymvlay3akhascbsc3uwevu4p7tl6n1pfrr0vcbubjnn36rlvq3ahrudupb7ltf7qc92c5lfwyj1qzfgsh073o1wpapmgvh5kxth71iuxrmnbc86ym71bprm1sn4ypzylin4fglu1jvf0qkt4xmf6n2w64v6mppt == 
\v\3\k\3\r\l\y\j\b\c\g\9\s\j\9\n\x\7\n\a\3\p\o\o\2\z\m\6\p\y\0\z\a\p\7\u\7\i\2\p\s\s\i\7\k\u\n\y\k\h\h\9\s\6\v\u\g\b\9\k\k\6\p\5\r\5\7\m\e\s\m\d\f\8\f\5\7\f\7\z\2\b\a\m\1\j\y\m\m\t\7\j\o\x\9\w\2\5\q\p\z\3\2\i\d\a\x\9\9\n\b\m\b\2\s\h\r\s\o\b\l\8\f\8\i\a\x\r\t\8\6\c\2\s\1\f\a\9\o\0\9\u\l\f\k\e\p\w\3\d\d\3\p\l\s\u\7\x\h\4\6\4\3\d\e\n\4\f\z\m\g\3\1\5\d\g\o\6\i\m\w\b\n\m\c\p\i\i\d\7\s\z\h\4\1\6\u\k\9\0\f\a\r\s\4\3\o\t\t\y\2\s\m\y\3\n\0\a\d\l\w\u\m\b\2\s\y\4\p\w\8\d\f\f\5\k\l\n\e\7\r\2\u\n\q\z\g\m\5\8\n\f\f\z\b\9\x\h\j\z\d\2\5\x\0\a\l\y\6\i\a\1\9\k\2\n\b\4\f\8\b\v\e\j\r\2\e\9\5\9\n\n\2\2\b\f\6\7\o\8\3\n\n\0\l\p\4\b\x\d\t\l\9\7\g\c\a\g\k\2\p\k\q\8\o\e\1\l\u\9\w\5\u\f\o\1\f\p\h\0\4\0\2\f\m\g\y\w\o\4\t\h\m\y\m\v\l\a\y\3\a\k\h\a\s\c\b\s\c\3\u\w\e\v\u\4\p\7\t\l\6\n\1\p\f\r\r\0\v\c\b\u\b\j\n\n\3\6\r\l\v\q\3\a\h\r\u\d\u\p\b\7\l\t\f\7\q\c\9\2\c\5\l\f\w\y\j\1\q\z\f\g\s\h\0\7\3\o\1\w\p\a\p\m\g\v\h\5\k\x\t\h\7\1\i\u\x\r\m\n\b\c\8\6\y\m\7\1\b\p\r\m\1\s\n\4\y\p\z\y\l\i\n\4\f\g\l\u\1\j\v\f\0\q\k\t\4\x\m\f\6\n\2\w\6\4\v\6\m\p\p\t ]] 00:06:30.947 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.947 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:30.947 [2024-11-12 10:28:19.546697] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:30.947 [2024-11-12 10:28:19.546802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:06:30.947 [2024-11-12 10:28:19.689099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.206 [2024-11-12 10:28:19.716585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.206 [2024-11-12 10:28:19.742550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.206  [2024-11-12T10:28:19.964Z] Copying: 512/512 [B] (average 125 kBps) 00:06:31.206 00:06:31.206 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v3k3rlyjbcg9sj9nx7na3poo2zm6py0zap7u7i2pssi7kunykhh9s6vugb9kk6p5r57mesmdf8f57f7z2bam1jymmt7jox9w25qpz32idax99nbmb2shrsobl8f8iaxrt86c2s1fa9o09ulfkepw3dd3plsu7xh4643den4fzmg315dgo6imwbnmcpiid7szh416uk90fars43otty2smy3n0adlwumb2sy4pw8dff5klne7r2unqzgm58nffzb9xhjzd25x0aly6ia19k2nb4f8bvejr2e959nn22bf67o83nn0lp4bxdtl97gcagk2pkq8oe1lu9w5ufo1fph0402fmgywo4thmymvlay3akhascbsc3uwevu4p7tl6n1pfrr0vcbubjnn36rlvq3ahrudupb7ltf7qc92c5lfwyj1qzfgsh073o1wpapmgvh5kxth71iuxrmnbc86ym71bprm1sn4ypzylin4fglu1jvf0qkt4xmf6n2w64v6mppt == 
\v\3\k\3\r\l\y\j\b\c\g\9\s\j\9\n\x\7\n\a\3\p\o\o\2\z\m\6\p\y\0\z\a\p\7\u\7\i\2\p\s\s\i\7\k\u\n\y\k\h\h\9\s\6\v\u\g\b\9\k\k\6\p\5\r\5\7\m\e\s\m\d\f\8\f\5\7\f\7\z\2\b\a\m\1\j\y\m\m\t\7\j\o\x\9\w\2\5\q\p\z\3\2\i\d\a\x\9\9\n\b\m\b\2\s\h\r\s\o\b\l\8\f\8\i\a\x\r\t\8\6\c\2\s\1\f\a\9\o\0\9\u\l\f\k\e\p\w\3\d\d\3\p\l\s\u\7\x\h\4\6\4\3\d\e\n\4\f\z\m\g\3\1\5\d\g\o\6\i\m\w\b\n\m\c\p\i\i\d\7\s\z\h\4\1\6\u\k\9\0\f\a\r\s\4\3\o\t\t\y\2\s\m\y\3\n\0\a\d\l\w\u\m\b\2\s\y\4\p\w\8\d\f\f\5\k\l\n\e\7\r\2\u\n\q\z\g\m\5\8\n\f\f\z\b\9\x\h\j\z\d\2\5\x\0\a\l\y\6\i\a\1\9\k\2\n\b\4\f\8\b\v\e\j\r\2\e\9\5\9\n\n\2\2\b\f\6\7\o\8\3\n\n\0\l\p\4\b\x\d\t\l\9\7\g\c\a\g\k\2\p\k\q\8\o\e\1\l\u\9\w\5\u\f\o\1\f\p\h\0\4\0\2\f\m\g\y\w\o\4\t\h\m\y\m\v\l\a\y\3\a\k\h\a\s\c\b\s\c\3\u\w\e\v\u\4\p\7\t\l\6\n\1\p\f\r\r\0\v\c\b\u\b\j\n\n\3\6\r\l\v\q\3\a\h\r\u\d\u\p\b\7\l\t\f\7\q\c\9\2\c\5\l\f\w\y\j\1\q\z\f\g\s\h\0\7\3\o\1\w\p\a\p\m\g\v\h\5\k\x\t\h\7\1\i\u\x\r\m\n\b\c\8\6\y\m\7\1\b\p\r\m\1\s\n\4\y\p\z\y\l\i\n\4\f\g\l\u\1\j\v\f\0\q\k\t\4\x\m\f\6\n\2\w\6\4\v\6\m\p\p\t ]] 00:06:31.206 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.206 10:28:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:31.206 [2024-11-12 10:28:19.942321] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:31.206 [2024-11-12 10:28:19.942435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 00:06:31.465 [2024-11-12 10:28:20.089987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.465 [2024-11-12 10:28:20.120893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.465 [2024-11-12 10:28:20.148774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.465  [2024-11-12T10:28:20.482Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.725 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v3k3rlyjbcg9sj9nx7na3poo2zm6py0zap7u7i2pssi7kunykhh9s6vugb9kk6p5r57mesmdf8f57f7z2bam1jymmt7jox9w25qpz32idax99nbmb2shrsobl8f8iaxrt86c2s1fa9o09ulfkepw3dd3plsu7xh4643den4fzmg315dgo6imwbnmcpiid7szh416uk90fars43otty2smy3n0adlwumb2sy4pw8dff5klne7r2unqzgm58nffzb9xhjzd25x0aly6ia19k2nb4f8bvejr2e959nn22bf67o83nn0lp4bxdtl97gcagk2pkq8oe1lu9w5ufo1fph0402fmgywo4thmymvlay3akhascbsc3uwevu4p7tl6n1pfrr0vcbubjnn36rlvq3ahrudupb7ltf7qc92c5lfwyj1qzfgsh073o1wpapmgvh5kxth71iuxrmnbc86ym71bprm1sn4ypzylin4fglu1jvf0qkt4xmf6n2w64v6mppt == 
\v\3\k\3\r\l\y\j\b\c\g\9\s\j\9\n\x\7\n\a\3\p\o\o\2\z\m\6\p\y\0\z\a\p\7\u\7\i\2\p\s\s\i\7\k\u\n\y\k\h\h\9\s\6\v\u\g\b\9\k\k\6\p\5\r\5\7\m\e\s\m\d\f\8\f\5\7\f\7\z\2\b\a\m\1\j\y\m\m\t\7\j\o\x\9\w\2\5\q\p\z\3\2\i\d\a\x\9\9\n\b\m\b\2\s\h\r\s\o\b\l\8\f\8\i\a\x\r\t\8\6\c\2\s\1\f\a\9\o\0\9\u\l\f\k\e\p\w\3\d\d\3\p\l\s\u\7\x\h\4\6\4\3\d\e\n\4\f\z\m\g\3\1\5\d\g\o\6\i\m\w\b\n\m\c\p\i\i\d\7\s\z\h\4\1\6\u\k\9\0\f\a\r\s\4\3\o\t\t\y\2\s\m\y\3\n\0\a\d\l\w\u\m\b\2\s\y\4\p\w\8\d\f\f\5\k\l\n\e\7\r\2\u\n\q\z\g\m\5\8\n\f\f\z\b\9\x\h\j\z\d\2\5\x\0\a\l\y\6\i\a\1\9\k\2\n\b\4\f\8\b\v\e\j\r\2\e\9\5\9\n\n\2\2\b\f\6\7\o\8\3\n\n\0\l\p\4\b\x\d\t\l\9\7\g\c\a\g\k\2\p\k\q\8\o\e\1\l\u\9\w\5\u\f\o\1\f\p\h\0\4\0\2\f\m\g\y\w\o\4\t\h\m\y\m\v\l\a\y\3\a\k\h\a\s\c\b\s\c\3\u\w\e\v\u\4\p\7\t\l\6\n\1\p\f\r\r\0\v\c\b\u\b\j\n\n\3\6\r\l\v\q\3\a\h\r\u\d\u\p\b\7\l\t\f\7\q\c\9\2\c\5\l\f\w\y\j\1\q\z\f\g\s\h\0\7\3\o\1\w\p\a\p\m\g\v\h\5\k\x\t\h\7\1\i\u\x\r\m\n\b\c\8\6\y\m\7\1\b\p\r\m\1\s\n\4\y\p\z\y\l\i\n\4\f\g\l\u\1\j\v\f\0\q\k\t\4\x\m\f\6\n\2\w\6\4\v\6\m\p\p\t ]] 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.725 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:31.725 [2024-11-12 10:28:20.347885] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:31.725 [2024-11-12 10:28:20.347990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 00:06:31.725 [2024-11-12 10:28:20.481213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.985 [2024-11-12 10:28:20.509178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.985 [2024-11-12 10:28:20.534930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.985  [2024-11-12T10:28:20.743Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.985 00:06:31.985 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iqz2e75wu1byvok5pvg3lp7hz2jzb1lchsqm1067f70vrzpz8u08q9ccnskppowlcil4i6uz36brvzp4nhywwozmp86i11mytlpv8u37m9i45vgtfuw7dtr7m1b4x675a0sttq5nrbzczmjeg7u8hdclwdh7csv06rcv2dhvaatrk0tbo60itygyj0pjlqk4u1s7nm6ze2ulb3l2zkczk5cp02sgbhha0fss4jmevrjbqrhp2vt86ywjd4xkkhtk716007v4ulvtyh2ngbgy888bpi8f4lfyhple6eqkypbxgz5ubs53l50o1igx9i7p9k2c251bkuof7yneblzy83kzy7sso701ae14mcwtctnp6kri9gdqznfk0i83vr5229votizb4fkzat764uzr6vhnonr1ouoc0b4rgwfpua8qnj1d068vzfhwtw8fxkfuke0v4x0zjkjqfnu49yqyirmai3l0p0qoqwarn47vybc05e6sh2mkslxcpvql7j11 == \i\q\z\2\e\7\5\w\u\1\b\y\v\o\k\5\p\v\g\3\l\p\7\h\z\2\j\z\b\1\l\c\h\s\q\m\1\0\6\7\f\7\0\v\r\z\p\z\8\u\0\8\q\9\c\c\n\s\k\p\p\o\w\l\c\i\l\4\i\6\u\z\3\6\b\r\v\z\p\4\n\h\y\w\w\o\z\m\p\8\6\i\1\1\m\y\t\l\p\v\8\u\3\7\m\9\i\4\5\v\g\t\f\u\w\7\d\t\r\7\m\1\b\4\x\6\7\5\a\0\s\t\t\q\5\n\r\b\z\c\z\m\j\e\g\7\u\8\h\d\c\l\w\d\h\7\c\s\v\0\6\r\c\v\2\d\h\v\a\a\t\r\k\0\t\b\o\6\0\i\t\y\g\y\j\0\p\j\l\q\k\4\u\1\s\7\n\m\6\z\e\2\u\l\b\3\l\2\z\k\c\z\k\5\c\p\0\2\s\g\b\h\h\a\0\f\s\s\4\j\m\e\v\r\j\b\q\r\h\p\2\v\t\8\6\y\w\j\d\4\x\k\k\h\t\k\7\1\6\0\0\7\v\4\u\l\v\t\y\h\2\n\g\b\g\y\8\8\8\b\p\i\8\f\4\l\f\y\h\p\l\e\6\e\q\k\y\p\b\x\g\z\5\u\b\s\5\3\l\5\0\o\1\i\g\x\9\i\7\p\9\k\2\c\2\5\1\b\k\u\o\f\7\y\n\e\b\l\z\y\8\3\k\z\y\7\s\s\o\7\0\1\a\e\1\4\m\c\w\t\c\t\n\p\6\k\r\i\9\g\d\q\z\n\f\k\0\i\8\3\v\r\5\2\2\9\v\o\t\i\z\b\4\f\k\z\a\t\7\6\4\u\z\r\6\v\h\n\o\n\r\1\o\u\o\c\0\b\4\r\g\w\f\p\u\a\8\q\n\j\1\d\0\6\8\v\z\f\h\w\t\w\8\f\x\k\f\u\k\e\0\v\4\x\0\z\j\k\j\q\f\n\u\4\9\y\q\y\i\r\m\a\i\3\l\0\p\0\q\o\q\w\a\r\n\4\7\v\y\b\c\0\5\e\6\s\h\2\m\k\s\l\x\c\p\v\q\l\7\j\1\1 ]] 00:06:31.985 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.985 10:28:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:31.985 [2024-11-12 10:28:20.736075] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:31.985 [2024-11-12 10:28:20.736182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:06:32.244 [2024-11-12 10:28:20.881669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.244 [2024-11-12 10:28:20.908262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.244 [2024-11-12 10:28:20.934040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.244  [2024-11-12T10:28:21.262Z] Copying: 512/512 [B] (average 500 kBps) 00:06:32.504 00:06:32.504 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iqz2e75wu1byvok5pvg3lp7hz2jzb1lchsqm1067f70vrzpz8u08q9ccnskppowlcil4i6uz36brvzp4nhywwozmp86i11mytlpv8u37m9i45vgtfuw7dtr7m1b4x675a0sttq5nrbzczmjeg7u8hdclwdh7csv06rcv2dhvaatrk0tbo60itygyj0pjlqk4u1s7nm6ze2ulb3l2zkczk5cp02sgbhha0fss4jmevrjbqrhp2vt86ywjd4xkkhtk716007v4ulvtyh2ngbgy888bpi8f4lfyhple6eqkypbxgz5ubs53l50o1igx9i7p9k2c251bkuof7yneblzy83kzy7sso701ae14mcwtctnp6kri9gdqznfk0i83vr5229votizb4fkzat764uzr6vhnonr1ouoc0b4rgwfpua8qnj1d068vzfhwtw8fxkfuke0v4x0zjkjqfnu49yqyirmai3l0p0qoqwarn47vybc05e6sh2mkslxcpvql7j11 == \i\q\z\2\e\7\5\w\u\1\b\y\v\o\k\5\p\v\g\3\l\p\7\h\z\2\j\z\b\1\l\c\h\s\q\m\1\0\6\7\f\7\0\v\r\z\p\z\8\u\0\8\q\9\c\c\n\s\k\p\p\o\w\l\c\i\l\4\i\6\u\z\3\6\b\r\v\z\p\4\n\h\y\w\w\o\z\m\p\8\6\i\1\1\m\y\t\l\p\v\8\u\3\7\m\9\i\4\5\v\g\t\f\u\w\7\d\t\r\7\m\1\b\4\x\6\7\5\a\0\s\t\t\q\5\n\r\b\z\c\z\m\j\e\g\7\u\8\h\d\c\l\w\d\h\7\c\s\v\0\6\r\c\v\2\d\h\v\a\a\t\r\k\0\t\b\o\6\0\i\t\y\g\y\j\0\p\j\l\q\k\4\u\1\s\7\n\m\6\z\e\2\u\l\b\3\l\2\z\k\c\z\k\5\c\p\0\2\s\g\b\h\h\a\0\f\s\s\4\j\m\e\v\r\j\b\q\r\h\p\2\v\t\8\6\y\w\j\d\4\x\k\k\h\t\k\7\1\6\0\0\7\v\4\u\l\v\t\y\h\2\n\g\b\g\y\8\8\8\b\p\i\8\f\4\l\f\y\h\p\l\e\6\e\q\k\y\p\b\x\g\z\5\u\b\s\5\3\l\5\0\o\1\i\g\x\9\i\7\p\9\k\2\c\2\5\1\b\k\u\o\f\7\y\n\e\b\l\z\y\8\3\k\z\y\7\s\s\o\7\0\1\a\e\1\4\m\c\w\t\c\t\n\p\6\k\r\i\9\g\d\q\z\n\f\k\0\i\8\3\v\r\5\2\2\9\v\o\t\i\z\b\4\f\k\z\a\t\7\6\4\u\z\r\6\v\h\n\o\n\r\1\o\u\o\c\0\b\4\r\g\w\f\p\u\a\8\q\n\j\1\d\0\6\8\v\z\f\h\w\t\w\8\f\x\k\f\u\k\e\0\v\4\x\0\z\j\k\j\q\f\n\u\4\9\y\q\y\i\r\m\a\i\3\l\0\p\0\q\o\q\w\a\r\n\4\7\v\y\b\c\0\5\e\6\s\h\2\m\k\s\l\x\c\p\v\q\l\7\j\1\1 ]] 00:06:32.504 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.504 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:32.504 [2024-11-12 10:28:21.111793] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:32.504 [2024-11-12 10:28:21.111900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:06:32.504 [2024-11-12 10:28:21.247476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.762 [2024-11-12 10:28:21.275663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.762 [2024-11-12 10:28:21.301525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.762  [2024-11-12T10:28:21.520Z] Copying: 512/512 [B] (average 100 kBps) 00:06:32.762 00:06:32.762 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iqz2e75wu1byvok5pvg3lp7hz2jzb1lchsqm1067f70vrzpz8u08q9ccnskppowlcil4i6uz36brvzp4nhywwozmp86i11mytlpv8u37m9i45vgtfuw7dtr7m1b4x675a0sttq5nrbzczmjeg7u8hdclwdh7csv06rcv2dhvaatrk0tbo60itygyj0pjlqk4u1s7nm6ze2ulb3l2zkczk5cp02sgbhha0fss4jmevrjbqrhp2vt86ywjd4xkkhtk716007v4ulvtyh2ngbgy888bpi8f4lfyhple6eqkypbxgz5ubs53l50o1igx9i7p9k2c251bkuof7yneblzy83kzy7sso701ae14mcwtctnp6kri9gdqznfk0i83vr5229votizb4fkzat764uzr6vhnonr1ouoc0b4rgwfpua8qnj1d068vzfhwtw8fxkfuke0v4x0zjkjqfnu49yqyirmai3l0p0qoqwarn47vybc05e6sh2mkslxcpvql7j11 == \i\q\z\2\e\7\5\w\u\1\b\y\v\o\k\5\p\v\g\3\l\p\7\h\z\2\j\z\b\1\l\c\h\s\q\m\1\0\6\7\f\7\0\v\r\z\p\z\8\u\0\8\q\9\c\c\n\s\k\p\p\o\w\l\c\i\l\4\i\6\u\z\3\6\b\r\v\z\p\4\n\h\y\w\w\o\z\m\p\8\6\i\1\1\m\y\t\l\p\v\8\u\3\7\m\9\i\4\5\v\g\t\f\u\w\7\d\t\r\7\m\1\b\4\x\6\7\5\a\0\s\t\t\q\5\n\r\b\z\c\z\m\j\e\g\7\u\8\h\d\c\l\w\d\h\7\c\s\v\0\6\r\c\v\2\d\h\v\a\a\t\r\k\0\t\b\o\6\0\i\t\y\g\y\j\0\p\j\l\q\k\4\u\1\s\7\n\m\6\z\e\2\u\l\b\3\l\2\z\k\c\z\k\5\c\p\0\2\s\g\b\h\h\a\0\f\s\s\4\j\m\e\v\r\j\b\q\r\h\p\2\v\t\8\6\y\w\j\d\4\x\k\k\h\t\k\7\1\6\0\0\7\v\4\u\l\v\t\y\h\2\n\g\b\g\y\8\8\8\b\p\i\8\f\4\l\f\y\h\p\l\e\6\e\q\k\y\p\b\x\g\z\5\u\b\s\5\3\l\5\0\o\1\i\g\x\9\i\7\p\9\k\2\c\2\5\1\b\k\u\o\f\7\y\n\e\b\l\z\y\8\3\k\z\y\7\s\s\o\7\0\1\a\e\1\4\m\c\w\t\c\t\n\p\6\k\r\i\9\g\d\q\z\n\f\k\0\i\8\3\v\r\5\2\2\9\v\o\t\i\z\b\4\f\k\z\a\t\7\6\4\u\z\r\6\v\h\n\o\n\r\1\o\u\o\c\0\b\4\r\g\w\f\p\u\a\8\q\n\j\1\d\0\6\8\v\z\f\h\w\t\w\8\f\x\k\f\u\k\e\0\v\4\x\0\z\j\k\j\q\f\n\u\4\9\y\q\y\i\r\m\a\i\3\l\0\p\0\q\o\q\w\a\r\n\4\7\v\y\b\c\0\5\e\6\s\h\2\m\k\s\l\x\c\p\v\q\l\7\j\1\1 ]] 00:06:32.762 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.762 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:32.762 [2024-11-12 10:28:21.495305] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:32.762 [2024-11-12 10:28:21.495384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:06:33.020 [2024-11-12 10:28:21.637221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.020 [2024-11-12 10:28:21.664230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.020 [2024-11-12 10:28:21.689782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.020  [2024-11-12T10:28:22.037Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.279 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iqz2e75wu1byvok5pvg3lp7hz2jzb1lchsqm1067f70vrzpz8u08q9ccnskppowlcil4i6uz36brvzp4nhywwozmp86i11mytlpv8u37m9i45vgtfuw7dtr7m1b4x675a0sttq5nrbzczmjeg7u8hdclwdh7csv06rcv2dhvaatrk0tbo60itygyj0pjlqk4u1s7nm6ze2ulb3l2zkczk5cp02sgbhha0fss4jmevrjbqrhp2vt86ywjd4xkkhtk716007v4ulvtyh2ngbgy888bpi8f4lfyhple6eqkypbxgz5ubs53l50o1igx9i7p9k2c251bkuof7yneblzy83kzy7sso701ae14mcwtctnp6kri9gdqznfk0i83vr5229votizb4fkzat764uzr6vhnonr1ouoc0b4rgwfpua8qnj1d068vzfhwtw8fxkfuke0v4x0zjkjqfnu49yqyirmai3l0p0qoqwarn47vybc05e6sh2mkslxcpvql7j11 == \i\q\z\2\e\7\5\w\u\1\b\y\v\o\k\5\p\v\g\3\l\p\7\h\z\2\j\z\b\1\l\c\h\s\q\m\1\0\6\7\f\7\0\v\r\z\p\z\8\u\0\8\q\9\c\c\n\s\k\p\p\o\w\l\c\i\l\4\i\6\u\z\3\6\b\r\v\z\p\4\n\h\y\w\w\o\z\m\p\8\6\i\1\1\m\y\t\l\p\v\8\u\3\7\m\9\i\4\5\v\g\t\f\u\w\7\d\t\r\7\m\1\b\4\x\6\7\5\a\0\s\t\t\q\5\n\r\b\z\c\z\m\j\e\g\7\u\8\h\d\c\l\w\d\h\7\c\s\v\0\6\r\c\v\2\d\h\v\a\a\t\r\k\0\t\b\o\6\0\i\t\y\g\y\j\0\p\j\l\q\k\4\u\1\s\7\n\m\6\z\e\2\u\l\b\3\l\2\z\k\c\z\k\5\c\p\0\2\s\g\b\h\h\a\0\f\s\s\4\j\m\e\v\r\j\b\q\r\h\p\2\v\t\8\6\y\w\j\d\4\x\k\k\h\t\k\7\1\6\0\0\7\v\4\u\l\v\t\y\h\2\n\g\b\g\y\8\8\8\b\p\i\8\f\4\l\f\y\h\p\l\e\6\e\q\k\y\p\b\x\g\z\5\u\b\s\5\3\l\5\0\o\1\i\g\x\9\i\7\p\9\k\2\c\2\5\1\b\k\u\o\f\7\y\n\e\b\l\z\y\8\3\k\z\y\7\s\s\o\7\0\1\a\e\1\4\m\c\w\t\c\t\n\p\6\k\r\i\9\g\d\q\z\n\f\k\0\i\8\3\v\r\5\2\2\9\v\o\t\i\z\b\4\f\k\z\a\t\7\6\4\u\z\r\6\v\h\n\o\n\r\1\o\u\o\c\0\b\4\r\g\w\f\p\u\a\8\q\n\j\1\d\0\6\8\v\z\f\h\w\t\w\8\f\x\k\f\u\k\e\0\v\4\x\0\z\j\k\j\q\f\n\u\4\9\y\q\y\i\r\m\a\i\3\l\0\p\0\q\o\q\w\a\r\n\4\7\v\y\b\c\0\5\e\6\s\h\2\m\k\s\l\x\c\p\v\q\l\7\j\1\1 ]] 00:06:33.279 00:06:33.279 real 0m3.157s 00:06:33.279 user 0m1.513s 00:06:33.279 sys 0m0.660s 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.279 ************************************ 00:06:33.279 END TEST dd_flags_misc_forced_aio 00:06:33.279 ************************************ 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:33.279 10:28:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:33.279 00:06:33.279 real 0m15.400s 00:06:33.280 user 0m6.605s 00:06:33.280 sys 0m4.071s 00:06:33.280 10:28:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.280 10:28:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
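The eight spdk_dd runs above all follow one pattern: generate 512 random bytes, copy them with every pairing of a read flag (direct, nonblock) and a write flag (direct, nonblock, sync, dsync), and verify the output still matches the input. A minimal stand-alone sketch of that loop, using the binary and dump-file paths from the trace; /dev/urandom and cmp are stand-ins for the suite's gen_bytes helper and base32 comparison, not the suite's own code:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
for flag_ro in "${flags_ro[@]}"; do
  dd if=/dev/urandom of="$src" bs=512 count=1 status=none   # stand-in for gen_bytes 512
  for flag_rw in "${flags_rw[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
      --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    cmp "$src" "$dst"   # the suite compares base32-encoded dumps; cmp is an equivalent byte check
  done
done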
00:06:33.280 ************************************ 00:06:33.280 END TEST spdk_dd_posix 00:06:33.280 ************************************ 00:06:33.280 10:28:21 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:33.280 10:28:21 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.280 10:28:21 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.280 10:28:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:33.280 ************************************ 00:06:33.280 START TEST spdk_dd_malloc 00:06:33.280 ************************************ 00:06:33.280 10:28:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:33.280 * Looking for test storage... 00:06:33.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:33.280 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.280 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.280 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.539 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.539 --rc genhtml_branch_coverage=1 00:06:33.540 --rc genhtml_function_coverage=1 00:06:33.540 --rc genhtml_legend=1 00:06:33.540 --rc geninfo_all_blocks=1 00:06:33.540 --rc geninfo_unexecuted_blocks=1 00:06:33.540 00:06:33.540 ' 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.540 --rc genhtml_branch_coverage=1 00:06:33.540 --rc genhtml_function_coverage=1 00:06:33.540 --rc genhtml_legend=1 00:06:33.540 --rc geninfo_all_blocks=1 00:06:33.540 --rc geninfo_unexecuted_blocks=1 00:06:33.540 00:06:33.540 ' 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.540 --rc genhtml_branch_coverage=1 00:06:33.540 --rc genhtml_function_coverage=1 00:06:33.540 --rc genhtml_legend=1 00:06:33.540 --rc geninfo_all_blocks=1 00:06:33.540 --rc geninfo_unexecuted_blocks=1 00:06:33.540 00:06:33.540 ' 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.540 --rc genhtml_branch_coverage=1 00:06:33.540 --rc genhtml_function_coverage=1 00:06:33.540 --rc genhtml_legend=1 00:06:33.540 --rc geninfo_all_blocks=1 00:06:33.540 --rc geninfo_unexecuted_blocks=1 00:06:33.540 00:06:33.540 ' 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.540 10:28:22 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:33.540 ************************************ 00:06:33.540 START TEST dd_malloc_copy 00:06:33.540 ************************************ 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
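What dd_malloc_copy sets up below is a 512 MiB copy between two in-memory malloc bdevs (1048576 blocks of 512 bytes each), both declared in the JSON handed to spdk_dd on /dev/fd/62. A rough stand-alone equivalent, assuming the binary path from the trace and a temporary file in place of the file-descriptor hand-off:

cat > /tmp/malloc_copy.json <<'EOF'   # hypothetical path; the suite passes gen_conf output on /dev/fd/62
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json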
00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:33.540 10:28:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.540 [2024-11-12 10:28:22.191767] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:33.540 [2024-11-12 10:28:22.191866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 00:06:33.540 { 00:06:33.540 "subsystems": [ 00:06:33.540 { 00:06:33.540 "subsystem": "bdev", 00:06:33.540 "config": [ 00:06:33.540 { 00:06:33.540 "params": { 00:06:33.540 "block_size": 512, 00:06:33.540 "num_blocks": 1048576, 00:06:33.540 "name": "malloc0" 00:06:33.540 }, 00:06:33.540 "method": "bdev_malloc_create" 00:06:33.540 }, 00:06:33.540 { 00:06:33.540 "params": { 00:06:33.540 "block_size": 512, 00:06:33.540 "num_blocks": 1048576, 00:06:33.540 "name": "malloc1" 00:06:33.540 }, 00:06:33.540 "method": "bdev_malloc_create" 00:06:33.540 }, 00:06:33.540 { 00:06:33.540 "method": "bdev_wait_for_examine" 00:06:33.540 } 00:06:33.540 ] 00:06:33.540 } 00:06:33.540 ] 00:06:33.540 } 00:06:33.799 [2024-11-12 10:28:22.336935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.799 [2024-11-12 10:28:22.363633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.799 [2024-11-12 10:28:22.390039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.178  [2024-11-12T10:28:24.874Z] Copying: 244/512 [MB] (244 MBps) [2024-11-12T10:28:24.874Z] Copying: 486/512 [MB] (242 MBps) [2024-11-12T10:28:25.133Z] Copying: 512/512 [MB] (average 242 MBps) 00:06:36.375 00:06:36.375 10:28:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:36.375 10:28:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:36.375 10:28:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:36.375 10:28:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.375 { 00:06:36.375 "subsystems": [ 00:06:36.375 { 00:06:36.375 "subsystem": "bdev", 00:06:36.375 "config": [ 00:06:36.375 { 00:06:36.375 "params": { 00:06:36.375 "block_size": 512, 00:06:36.375 "num_blocks": 1048576, 00:06:36.375 "name": "malloc0" 00:06:36.375 }, 00:06:36.375 "method": "bdev_malloc_create" 00:06:36.375 }, 00:06:36.375 { 00:06:36.375 "params": { 00:06:36.375 "block_size": 512, 00:06:36.375 "num_blocks": 1048576, 00:06:36.375 "name": "malloc1" 00:06:36.375 }, 00:06:36.375 "method": 
"bdev_malloc_create" 00:06:36.375 }, 00:06:36.375 { 00:06:36.375 "method": "bdev_wait_for_examine" 00:06:36.375 } 00:06:36.375 ] 00:06:36.375 } 00:06:36.375 ] 00:06:36.375 } 00:06:36.375 [2024-11-12 10:28:25.045158] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:36.375 [2024-11-12 10:28:25.045297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:06:36.635 [2024-11-12 10:28:25.188007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.635 [2024-11-12 10:28:25.217564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.635 [2024-11-12 10:28:25.246372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.015  [2024-11-12T10:28:27.710Z] Copying: 239/512 [MB] (239 MBps) [2024-11-12T10:28:27.710Z] Copying: 478/512 [MB] (238 MBps) [2024-11-12T10:28:27.970Z] Copying: 512/512 [MB] (average 238 MBps) 00:06:39.212 00:06:39.212 00:06:39.212 real 0m5.737s 00:06:39.212 user 0m5.140s 00:06:39.212 sys 0m0.444s 00:06:39.212 10:28:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.212 10:28:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.212 ************************************ 00:06:39.212 END TEST dd_malloc_copy 00:06:39.212 ************************************ 00:06:39.212 00:06:39.212 real 0m5.976s 00:06:39.212 user 0m5.278s 00:06:39.212 sys 0m0.549s 00:06:39.212 10:28:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.212 10:28:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:39.212 ************************************ 00:06:39.212 END TEST spdk_dd_malloc 00:06:39.212 ************************************ 00:06:39.212 10:28:27 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:39.212 10:28:27 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:39.212 10:28:27 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.212 10:28:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.212 ************************************ 00:06:39.212 START TEST spdk_dd_bdev_to_bdev 00:06:39.212 ************************************ 00:06:39.212 10:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:39.471 * Looking for test storage... 
00:06:39.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:39.471 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.472 --rc genhtml_branch_coverage=1 00:06:39.472 --rc genhtml_function_coverage=1 00:06:39.472 --rc genhtml_legend=1 00:06:39.472 --rc geninfo_all_blocks=1 00:06:39.472 --rc geninfo_unexecuted_blocks=1 00:06:39.472 00:06:39.472 ' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.472 --rc genhtml_branch_coverage=1 00:06:39.472 --rc genhtml_function_coverage=1 00:06:39.472 --rc genhtml_legend=1 00:06:39.472 --rc geninfo_all_blocks=1 00:06:39.472 --rc geninfo_unexecuted_blocks=1 00:06:39.472 00:06:39.472 ' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.472 --rc genhtml_branch_coverage=1 00:06:39.472 --rc genhtml_function_coverage=1 00:06:39.472 --rc genhtml_legend=1 00:06:39.472 --rc geninfo_all_blocks=1 00:06:39.472 --rc geninfo_unexecuted_blocks=1 00:06:39.472 00:06:39.472 ' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.472 --rc genhtml_branch_coverage=1 00:06:39.472 --rc genhtml_function_coverage=1 00:06:39.472 --rc genhtml_legend=1 00:06:39.472 --rc geninfo_all_blocks=1 00:06:39.472 --rc geninfo_unexecuted_blocks=1 00:06:39.472 00:06:39.472 ' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.472 10:28:28 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.472 ************************************ 00:06:39.472 START TEST dd_inflate_file 00:06:39.472 ************************************ 00:06:39.472 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:39.472 [2024-11-12 10:28:28.214900] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:39.472 [2024-11-12 10:28:28.215162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:06:39.731 [2024-11-12 10:28:28.360246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.731 [2024-11-12 10:28:28.391740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.731 [2024-11-12 10:28:28.420155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.731  [2024-11-12T10:28:28.748Z] Copying: 64/64 [MB] (average 1560 MBps) 00:06:39.990 00:06:39.990 00:06:39.990 real 0m0.449s 00:06:39.990 user 0m0.261s 00:06:39.990 sys 0m0.210s 00:06:39.990 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.990 ************************************ 00:06:39.990 END TEST dd_inflate_file 00:06:39.990 ************************************ 00:06:39.990 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:39.990 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:39.990 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.991 ************************************ 00:06:39.991 START TEST dd_copy_to_out_bdev 00:06:39.991 ************************************ 00:06:39.991 10:28:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.991 [2024-11-12 10:28:28.721250] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:39.991 { 00:06:39.991 "subsystems": [ 00:06:39.991 { 00:06:39.991 "subsystem": "bdev", 00:06:39.991 "config": [ 00:06:39.991 { 00:06:39.991 "params": { 00:06:39.991 "trtype": "pcie", 00:06:39.991 "traddr": "0000:00:10.0", 00:06:39.991 "name": "Nvme0" 00:06:39.991 }, 00:06:39.991 "method": "bdev_nvme_attach_controller" 00:06:39.991 }, 00:06:39.991 { 00:06:39.991 "params": { 00:06:39.991 "trtype": "pcie", 00:06:39.991 "traddr": "0000:00:11.0", 00:06:39.991 "name": "Nvme1" 00:06:39.991 }, 00:06:39.991 "method": "bdev_nvme_attach_controller" 00:06:39.991 }, 00:06:39.991 { 00:06:39.991 "method": "bdev_wait_for_examine" 00:06:39.991 } 00:06:39.991 ] 00:06:39.991 } 00:06:39.991 ] 00:06:39.991 } 00:06:39.991 [2024-11-12 10:28:28.721936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:06:40.250 [2024-11-12 10:28:28.865732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.250 [2024-11-12 10:28:28.893738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.250 [2024-11-12 10:28:28.920474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.627  [2024-11-12T10:28:30.385Z] Copying: 52/64 [MB] (52 MBps) [2024-11-12T10:28:30.644Z] Copying: 64/64 [MB] (average 52 MBps) 00:06:41.886 00:06:41.886 ************************************ 00:06:41.886 END TEST dd_copy_to_out_bdev 00:06:41.886 ************************************ 00:06:41.886 00:06:41.886 real 0m1.785s 00:06:41.886 user 0m1.618s 00:06:41.886 sys 0m1.462s 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 ************************************ 00:06:41.886 START TEST dd_offset_magic 00:06:41.886 ************************************ 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.886 10:28:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 [2024-11-12 10:28:30.564469] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:41.887 [2024-11-12 10:28:30.564573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60666 ] 00:06:41.887 { 00:06:41.887 "subsystems": [ 00:06:41.887 { 00:06:41.887 "subsystem": "bdev", 00:06:41.887 "config": [ 00:06:41.887 { 00:06:41.887 "params": { 00:06:41.887 "trtype": "pcie", 00:06:41.887 "traddr": "0000:00:10.0", 00:06:41.887 "name": "Nvme0" 00:06:41.887 }, 00:06:41.887 "method": "bdev_nvme_attach_controller" 00:06:41.887 }, 00:06:41.887 { 00:06:41.887 "params": { 00:06:41.887 "trtype": "pcie", 00:06:41.887 "traddr": "0000:00:11.0", 00:06:41.887 "name": "Nvme1" 00:06:41.887 }, 00:06:41.887 "method": "bdev_nvme_attach_controller" 00:06:41.887 }, 00:06:41.887 { 00:06:41.887 "method": "bdev_wait_for_examine" 00:06:41.887 } 00:06:41.887 ] 00:06:41.887 } 00:06:41.887 ] 00:06:41.887 } 00:06:42.145 [2024-11-12 10:28:30.705159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.145 [2024-11-12 10:28:30.731691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.145 [2024-11-12 10:28:30.758099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.404  [2024-11-12T10:28:31.163Z] Copying: 65/65 [MB] (average 984 MBps) 00:06:42.405 00:06:42.405 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:42.405 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:42.405 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:42.405 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.664 [2024-11-12 10:28:31.199985] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:42.664 [2024-11-12 10:28:31.200089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ] 00:06:42.664 { 00:06:42.664 "subsystems": [ 00:06:42.664 { 00:06:42.664 "subsystem": "bdev", 00:06:42.664 "config": [ 00:06:42.664 { 00:06:42.664 "params": { 00:06:42.664 "trtype": "pcie", 00:06:42.664 "traddr": "0000:00:10.0", 00:06:42.664 "name": "Nvme0" 00:06:42.664 }, 00:06:42.664 "method": "bdev_nvme_attach_controller" 00:06:42.664 }, 00:06:42.664 { 00:06:42.664 "params": { 00:06:42.664 "trtype": "pcie", 00:06:42.664 "traddr": "0000:00:11.0", 00:06:42.664 "name": "Nvme1" 00:06:42.664 }, 00:06:42.664 "method": "bdev_nvme_attach_controller" 00:06:42.664 }, 00:06:42.664 { 00:06:42.664 "method": "bdev_wait_for_examine" 00:06:42.664 } 00:06:42.664 ] 00:06:42.664 } 00:06:42.664 ] 00:06:42.664 } 00:06:42.664 [2024-11-12 10:28:31.341343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.664 [2024-11-12 10:28:31.373411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.664 [2024-11-12 10:28:31.401893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.923  [2024-11-12T10:28:31.681Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:42.923 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:43.182 10:28:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:43.182 [2024-11-12 10:28:31.741102] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:43.182 [2024-11-12 10:28:31.741712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60703 ] 00:06:43.182 { 00:06:43.182 "subsystems": [ 00:06:43.182 { 00:06:43.182 "subsystem": "bdev", 00:06:43.182 "config": [ 00:06:43.182 { 00:06:43.182 "params": { 00:06:43.182 "trtype": "pcie", 00:06:43.182 "traddr": "0000:00:10.0", 00:06:43.182 "name": "Nvme0" 00:06:43.182 }, 00:06:43.182 "method": "bdev_nvme_attach_controller" 00:06:43.182 }, 00:06:43.182 { 00:06:43.182 "params": { 00:06:43.182 "trtype": "pcie", 00:06:43.182 "traddr": "0000:00:11.0", 00:06:43.182 "name": "Nvme1" 00:06:43.182 }, 00:06:43.182 "method": "bdev_nvme_attach_controller" 00:06:43.182 }, 00:06:43.182 { 00:06:43.182 "method": "bdev_wait_for_examine" 00:06:43.182 } 00:06:43.182 ] 00:06:43.182 } 00:06:43.182 ] 00:06:43.182 } 00:06:43.182 [2024-11-12 10:28:31.885289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.182 [2024-11-12 10:28:31.912126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.441 [2024-11-12 10:28:31.940152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.441  [2024-11-12T10:28:32.458Z] Copying: 65/65 [MB] (average 1048 MBps) 00:06:43.700 00:06:43.700 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:43.700 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:43.700 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:43.700 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:43.700 [2024-11-12 10:28:32.357186] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:43.700 [2024-11-12 10:28:32.357312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60717 ] 00:06:43.700 { 00:06:43.700 "subsystems": [ 00:06:43.700 { 00:06:43.700 "subsystem": "bdev", 00:06:43.700 "config": [ 00:06:43.700 { 00:06:43.700 "params": { 00:06:43.700 "trtype": "pcie", 00:06:43.700 "traddr": "0000:00:10.0", 00:06:43.700 "name": "Nvme0" 00:06:43.700 }, 00:06:43.700 "method": "bdev_nvme_attach_controller" 00:06:43.700 }, 00:06:43.700 { 00:06:43.700 "params": { 00:06:43.700 "trtype": "pcie", 00:06:43.700 "traddr": "0000:00:11.0", 00:06:43.700 "name": "Nvme1" 00:06:43.700 }, 00:06:43.700 "method": "bdev_nvme_attach_controller" 00:06:43.700 }, 00:06:43.700 { 00:06:43.700 "method": "bdev_wait_for_examine" 00:06:43.700 } 00:06:43.701 ] 00:06:43.701 } 00:06:43.701 ] 00:06:43.701 } 00:06:43.959 [2024-11-12 10:28:32.495668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.959 [2024-11-12 10:28:32.523910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.959 [2024-11-12 10:28:32.550925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.959  [2024-11-12T10:28:32.977Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:44.219 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:44.219 00:06:44.219 real 0m2.314s 00:06:44.219 user 0m1.723s 00:06:44.219 sys 0m0.582s 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 ************************************ 00:06:44.219 END TEST dd_offset_magic 00:06:44.219 ************************************ 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.219 10:28:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 [2024-11-12 10:28:32.925084] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:44.219 [2024-11-12 10:28:32.925216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:06:44.219 { 00:06:44.219 "subsystems": [ 00:06:44.219 { 00:06:44.219 "subsystem": "bdev", 00:06:44.219 "config": [ 00:06:44.219 { 00:06:44.219 "params": { 00:06:44.219 "trtype": "pcie", 00:06:44.219 "traddr": "0000:00:10.0", 00:06:44.219 "name": "Nvme0" 00:06:44.219 }, 00:06:44.219 "method": "bdev_nvme_attach_controller" 00:06:44.219 }, 00:06:44.219 { 00:06:44.219 "params": { 00:06:44.219 "trtype": "pcie", 00:06:44.219 "traddr": "0000:00:11.0", 00:06:44.219 "name": "Nvme1" 00:06:44.219 }, 00:06:44.219 "method": "bdev_nvme_attach_controller" 00:06:44.219 }, 00:06:44.219 { 00:06:44.219 "method": "bdev_wait_for_examine" 00:06:44.219 } 00:06:44.219 ] 00:06:44.219 } 00:06:44.219 ] 00:06:44.219 } 00:06:44.479 [2024-11-12 10:28:33.073548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.479 [2024-11-12 10:28:33.110239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.479 [2024-11-12 10:28:33.142863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.738  [2024-11-12T10:28:33.496Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:44.738 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.738 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.997 [2024-11-12 10:28:33.499215] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:44.997 [2024-11-12 10:28:33.499787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:06:44.997 { 00:06:44.997 "subsystems": [ 00:06:44.997 { 00:06:44.997 "subsystem": "bdev", 00:06:44.997 "config": [ 00:06:44.997 { 00:06:44.997 "params": { 00:06:44.997 "trtype": "pcie", 00:06:44.997 "traddr": "0000:00:10.0", 00:06:44.997 "name": "Nvme0" 00:06:44.997 }, 00:06:44.997 "method": "bdev_nvme_attach_controller" 00:06:44.997 }, 00:06:44.997 { 00:06:44.997 "params": { 00:06:44.997 "trtype": "pcie", 00:06:44.997 "traddr": "0000:00:11.0", 00:06:44.997 "name": "Nvme1" 00:06:44.997 }, 00:06:44.997 "method": "bdev_nvme_attach_controller" 00:06:44.997 }, 00:06:44.997 { 00:06:44.997 "method": "bdev_wait_for_examine" 00:06:44.997 } 00:06:44.997 ] 00:06:44.997 } 00:06:44.997 ] 00:06:44.997 } 00:06:44.997 [2024-11-12 10:28:33.645646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.997 [2024-11-12 10:28:33.682681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.998 [2024-11-12 10:28:33.715766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.257  [2024-11-12T10:28:34.015Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:45.257 00:06:45.257 10:28:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:45.516 00:06:45.516 real 0m6.060s 00:06:45.516 user 0m4.604s 00:06:45.516 sys 0m2.799s 00:06:45.516 10:28:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.516 ************************************ 00:06:45.516 END TEST spdk_dd_bdev_to_bdev 00:06:45.516 ************************************ 00:06:45.516 10:28:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.516 10:28:34 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:45.516 10:28:34 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.516 10:28:34 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.516 10:28:34 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.516 10:28:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.516 ************************************ 00:06:45.516 START TEST spdk_dd_uring 00:06:45.516 ************************************ 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.516 * Looking for test storage... 
00:06:45.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.516 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.516 --rc genhtml_branch_coverage=1 00:06:45.517 --rc genhtml_function_coverage=1 00:06:45.517 --rc genhtml_legend=1 00:06:45.517 --rc geninfo_all_blocks=1 00:06:45.517 --rc geninfo_unexecuted_blocks=1 00:06:45.517 00:06:45.517 ' 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.517 --rc genhtml_branch_coverage=1 00:06:45.517 --rc genhtml_function_coverage=1 00:06:45.517 --rc genhtml_legend=1 00:06:45.517 --rc geninfo_all_blocks=1 00:06:45.517 --rc geninfo_unexecuted_blocks=1 00:06:45.517 00:06:45.517 ' 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.517 --rc genhtml_branch_coverage=1 00:06:45.517 --rc genhtml_function_coverage=1 00:06:45.517 --rc genhtml_legend=1 00:06:45.517 --rc geninfo_all_blocks=1 00:06:45.517 --rc geninfo_unexecuted_blocks=1 00:06:45.517 00:06:45.517 ' 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.517 --rc genhtml_branch_coverage=1 00:06:45.517 --rc genhtml_function_coverage=1 00:06:45.517 --rc genhtml_legend=1 00:06:45.517 --rc geninfo_all_blocks=1 00:06:45.517 --rc geninfo_unexecuted_blocks=1 00:06:45.517 00:06:45.517 ' 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:45.517 ************************************ 00:06:45.517 START TEST dd_uring_copy 00:06:45.517 ************************************ 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:45.517 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:45.777 
10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=bxxi9ygefb07egbuy1hfd6uy6c9k0kf5m79c5mik3wm3eo4rd6c0fahsm8ltbvs252d0gp44q1mot1l9immngkgesbrb03o7vgbiefmlg35dk98vft6hch7qdx5u0gfmd46075ujphz2s4mhin9niu7nxdlowgctzya4d0vlkukpk4v06a01x6inuof5pldim6gq6jatnebr2wpl7p3kbks2h5bc48rvqzm4abhitdoiwlhlvpxfzol4u5qifu14dig0wqplfmf8www1aj2q5cr9dmzk6518tdof48bueuwpz64krk71x6we3vf2vur5a5t95cbdkb8wogj5l4lmr0l3olftw7yxb3yxu5s9fgu20qr8pdett480zhr0cbsx9492tsw3j0bd02dq52rj5gkch2z6j4gc3cnrkal5rakm56g6oezyu90iaaayz2l6dwlu23lfq8cnwerbrt2mv8kvj4rzblztpd8b11239rzljzaelt16cfz8iva0tp0utpl5ecr5a1cieoqv0be1h9567rqhlg0vllnitw1wbmqzegvc5192pnshkqxk75mkpmxwvwm5x53x34l7x3hi9g6mk5osso1441yugqmyxpok7gqtd3rqab2wneoiv09qu075znwa2lle5zxnmpa2mskzj81j2hx9h6jlkwgh18b3ibt4ccka09eglkf7lmbmd0kxtqhsfw5q59ydovkuo5poadwxxjgrcg9a541vixs6zzht5y33ihevucz3as0fey3eev0dphmpip03tetxhlsi7u2ncq2x5hzowl4agb59u8ch3lfskflnye3k9ki00te1dsx4izq8er25dfwy093d7qeryirjt33ea7x5fe769rmq2pi0kpatvpn7atg0wrssdiqgw7xmsysmg9d9g64p36qvju5l6yax5jd3f8c0j9164klj1afhq8arzo9ixrgmb83sz0s2yi0yf4w1075f3a7b5rp2u4aypg8n44xvxxsg2d55yl64ezft2j0i 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
bxxi9ygefb07egbuy1hfd6uy6c9k0kf5m79c5mik3wm3eo4rd6c0fahsm8ltbvs252d0gp44q1mot1l9immngkgesbrb03o7vgbiefmlg35dk98vft6hch7qdx5u0gfmd46075ujphz2s4mhin9niu7nxdlowgctzya4d0vlkukpk4v06a01x6inuof5pldim6gq6jatnebr2wpl7p3kbks2h5bc48rvqzm4abhitdoiwlhlvpxfzol4u5qifu14dig0wqplfmf8www1aj2q5cr9dmzk6518tdof48bueuwpz64krk71x6we3vf2vur5a5t95cbdkb8wogj5l4lmr0l3olftw7yxb3yxu5s9fgu20qr8pdett480zhr0cbsx9492tsw3j0bd02dq52rj5gkch2z6j4gc3cnrkal5rakm56g6oezyu90iaaayz2l6dwlu23lfq8cnwerbrt2mv8kvj4rzblztpd8b11239rzljzaelt16cfz8iva0tp0utpl5ecr5a1cieoqv0be1h9567rqhlg0vllnitw1wbmqzegvc5192pnshkqxk75mkpmxwvwm5x53x34l7x3hi9g6mk5osso1441yugqmyxpok7gqtd3rqab2wneoiv09qu075znwa2lle5zxnmpa2mskzj81j2hx9h6jlkwgh18b3ibt4ccka09eglkf7lmbmd0kxtqhsfw5q59ydovkuo5poadwxxjgrcg9a541vixs6zzht5y33ihevucz3as0fey3eev0dphmpip03tetxhlsi7u2ncq2x5hzowl4agb59u8ch3lfskflnye3k9ki00te1dsx4izq8er25dfwy093d7qeryirjt33ea7x5fe769rmq2pi0kpatvpn7atg0wrssdiqgw7xmsysmg9d9g64p36qvju5l6yax5jd3f8c0j9164klj1afhq8arzo9ixrgmb83sz0s2yi0yf4w1075f3a7b5rp2u4aypg8n44xvxxsg2d55yl64ezft2j0i 00:06:45.777 10:28:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:45.777 [2024-11-12 10:28:34.361205] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:45.777 [2024-11-12 10:28:34.361306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60843 ] 00:06:45.777 [2024-11-12 10:28:34.501144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.777 [2024-11-12 10:28:34.529614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.037 [2024-11-12 10:28:34.557372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.606  [2024-11-12T10:28:35.364Z] Copying: 511/511 [MB] (average 1350 MBps) 00:06:46.606 00:06:46.606 10:28:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:46.606 10:28:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:46.606 10:28:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:46.606 10:28:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.606 [2024-11-12 10:28:35.322555] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:46.606 [2024-11-12 10:28:35.322660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 00:06:46.606 { 00:06:46.606 "subsystems": [ 00:06:46.606 { 00:06:46.606 "subsystem": "bdev", 00:06:46.606 "config": [ 00:06:46.606 { 00:06:46.606 "params": { 00:06:46.606 "block_size": 512, 00:06:46.606 "num_blocks": 1048576, 00:06:46.606 "name": "malloc0" 00:06:46.606 }, 00:06:46.606 "method": "bdev_malloc_create" 00:06:46.606 }, 00:06:46.606 { 00:06:46.606 "params": { 00:06:46.606 "filename": "/dev/zram1", 00:06:46.606 "name": "uring0" 00:06:46.606 }, 00:06:46.606 "method": "bdev_uring_create" 00:06:46.606 }, 00:06:46.606 { 00:06:46.606 "method": "bdev_wait_for_examine" 00:06:46.606 } 00:06:46.606 ] 00:06:46.606 } 00:06:46.606 ] 00:06:46.606 } 00:06:46.865 [2024-11-12 10:28:35.465354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.865 [2024-11-12 10:28:35.494400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.865 [2024-11-12 10:28:35.521163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.245  [2024-11-12T10:28:37.940Z] Copying: 223/512 [MB] (223 MBps) [2024-11-12T10:28:37.940Z] Copying: 457/512 [MB] (233 MBps) [2024-11-12T10:28:38.199Z] Copying: 512/512 [MB] (average 230 MBps) 00:06:49.441 00:06:49.441 10:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:49.441 10:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:49.441 10:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:49.441 10:28:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.441 { 00:06:49.441 "subsystems": [ 00:06:49.441 { 00:06:49.441 "subsystem": "bdev", 00:06:49.441 "config": [ 00:06:49.441 { 00:06:49.441 "params": { 00:06:49.441 "block_size": 512, 00:06:49.441 "num_blocks": 1048576, 00:06:49.441 "name": "malloc0" 00:06:49.441 }, 00:06:49.441 "method": "bdev_malloc_create" 00:06:49.441 }, 00:06:49.441 { 00:06:49.441 "params": { 00:06:49.441 "filename": "/dev/zram1", 00:06:49.441 "name": "uring0" 00:06:49.441 }, 00:06:49.441 "method": "bdev_uring_create" 00:06:49.441 }, 00:06:49.441 { 00:06:49.441 "method": "bdev_wait_for_examine" 00:06:49.441 } 00:06:49.441 ] 00:06:49.441 } 00:06:49.441 ] 00:06:49.441 } 00:06:49.441 [2024-11-12 10:28:38.120102] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:49.442 [2024-11-12 10:28:38.120877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:06:49.701 [2024-11-12 10:28:38.267098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.701 [2024-11-12 10:28:38.294061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.701 [2024-11-12 10:28:38.320709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.678  [2024-11-12T10:28:40.818Z] Copying: 172/512 [MB] (172 MBps) [2024-11-12T10:28:41.755Z] Copying: 337/512 [MB] (165 MBps) [2024-11-12T10:28:41.755Z] Copying: 512/512 [MB] (average 171 MBps) 00:06:52.997 00:06:52.997 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:52.997 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ bxxi9ygefb07egbuy1hfd6uy6c9k0kf5m79c5mik3wm3eo4rd6c0fahsm8ltbvs252d0gp44q1mot1l9immngkgesbrb03o7vgbiefmlg35dk98vft6hch7qdx5u0gfmd46075ujphz2s4mhin9niu7nxdlowgctzya4d0vlkukpk4v06a01x6inuof5pldim6gq6jatnebr2wpl7p3kbks2h5bc48rvqzm4abhitdoiwlhlvpxfzol4u5qifu14dig0wqplfmf8www1aj2q5cr9dmzk6518tdof48bueuwpz64krk71x6we3vf2vur5a5t95cbdkb8wogj5l4lmr0l3olftw7yxb3yxu5s9fgu20qr8pdett480zhr0cbsx9492tsw3j0bd02dq52rj5gkch2z6j4gc3cnrkal5rakm56g6oezyu90iaaayz2l6dwlu23lfq8cnwerbrt2mv8kvj4rzblztpd8b11239rzljzaelt16cfz8iva0tp0utpl5ecr5a1cieoqv0be1h9567rqhlg0vllnitw1wbmqzegvc5192pnshkqxk75mkpmxwvwm5x53x34l7x3hi9g6mk5osso1441yugqmyxpok7gqtd3rqab2wneoiv09qu075znwa2lle5zxnmpa2mskzj81j2hx9h6jlkwgh18b3ibt4ccka09eglkf7lmbmd0kxtqhsfw5q59ydovkuo5poadwxxjgrcg9a541vixs6zzht5y33ihevucz3as0fey3eev0dphmpip03tetxhlsi7u2ncq2x5hzowl4agb59u8ch3lfskflnye3k9ki00te1dsx4izq8er25dfwy093d7qeryirjt33ea7x5fe769rmq2pi0kpatvpn7atg0wrssdiqgw7xmsysmg9d9g64p36qvju5l6yax5jd3f8c0j9164klj1afhq8arzo9ixrgmb83sz0s2yi0yf4w1075f3a7b5rp2u4aypg8n44xvxxsg2d55yl64ezft2j0i == 
\b\x\x\i\9\y\g\e\f\b\0\7\e\g\b\u\y\1\h\f\d\6\u\y\6\c\9\k\0\k\f\5\m\7\9\c\5\m\i\k\3\w\m\3\e\o\4\r\d\6\c\0\f\a\h\s\m\8\l\t\b\v\s\2\5\2\d\0\g\p\4\4\q\1\m\o\t\1\l\9\i\m\m\n\g\k\g\e\s\b\r\b\0\3\o\7\v\g\b\i\e\f\m\l\g\3\5\d\k\9\8\v\f\t\6\h\c\h\7\q\d\x\5\u\0\g\f\m\d\4\6\0\7\5\u\j\p\h\z\2\s\4\m\h\i\n\9\n\i\u\7\n\x\d\l\o\w\g\c\t\z\y\a\4\d\0\v\l\k\u\k\p\k\4\v\0\6\a\0\1\x\6\i\n\u\o\f\5\p\l\d\i\m\6\g\q\6\j\a\t\n\e\b\r\2\w\p\l\7\p\3\k\b\k\s\2\h\5\b\c\4\8\r\v\q\z\m\4\a\b\h\i\t\d\o\i\w\l\h\l\v\p\x\f\z\o\l\4\u\5\q\i\f\u\1\4\d\i\g\0\w\q\p\l\f\m\f\8\w\w\w\1\a\j\2\q\5\c\r\9\d\m\z\k\6\5\1\8\t\d\o\f\4\8\b\u\e\u\w\p\z\6\4\k\r\k\7\1\x\6\w\e\3\v\f\2\v\u\r\5\a\5\t\9\5\c\b\d\k\b\8\w\o\g\j\5\l\4\l\m\r\0\l\3\o\l\f\t\w\7\y\x\b\3\y\x\u\5\s\9\f\g\u\2\0\q\r\8\p\d\e\t\t\4\8\0\z\h\r\0\c\b\s\x\9\4\9\2\t\s\w\3\j\0\b\d\0\2\d\q\5\2\r\j\5\g\k\c\h\2\z\6\j\4\g\c\3\c\n\r\k\a\l\5\r\a\k\m\5\6\g\6\o\e\z\y\u\9\0\i\a\a\a\y\z\2\l\6\d\w\l\u\2\3\l\f\q\8\c\n\w\e\r\b\r\t\2\m\v\8\k\v\j\4\r\z\b\l\z\t\p\d\8\b\1\1\2\3\9\r\z\l\j\z\a\e\l\t\1\6\c\f\z\8\i\v\a\0\t\p\0\u\t\p\l\5\e\c\r\5\a\1\c\i\e\o\q\v\0\b\e\1\h\9\5\6\7\r\q\h\l\g\0\v\l\l\n\i\t\w\1\w\b\m\q\z\e\g\v\c\5\1\9\2\p\n\s\h\k\q\x\k\7\5\m\k\p\m\x\w\v\w\m\5\x\5\3\x\3\4\l\7\x\3\h\i\9\g\6\m\k\5\o\s\s\o\1\4\4\1\y\u\g\q\m\y\x\p\o\k\7\g\q\t\d\3\r\q\a\b\2\w\n\e\o\i\v\0\9\q\u\0\7\5\z\n\w\a\2\l\l\e\5\z\x\n\m\p\a\2\m\s\k\z\j\8\1\j\2\h\x\9\h\6\j\l\k\w\g\h\1\8\b\3\i\b\t\4\c\c\k\a\0\9\e\g\l\k\f\7\l\m\b\m\d\0\k\x\t\q\h\s\f\w\5\q\5\9\y\d\o\v\k\u\o\5\p\o\a\d\w\x\x\j\g\r\c\g\9\a\5\4\1\v\i\x\s\6\z\z\h\t\5\y\3\3\i\h\e\v\u\c\z\3\a\s\0\f\e\y\3\e\e\v\0\d\p\h\m\p\i\p\0\3\t\e\t\x\h\l\s\i\7\u\2\n\c\q\2\x\5\h\z\o\w\l\4\a\g\b\5\9\u\8\c\h\3\l\f\s\k\f\l\n\y\e\3\k\9\k\i\0\0\t\e\1\d\s\x\4\i\z\q\8\e\r\2\5\d\f\w\y\0\9\3\d\7\q\e\r\y\i\r\j\t\3\3\e\a\7\x\5\f\e\7\6\9\r\m\q\2\p\i\0\k\p\a\t\v\p\n\7\a\t\g\0\w\r\s\s\d\i\q\g\w\7\x\m\s\y\s\m\g\9\d\9\g\6\4\p\3\6\q\v\j\u\5\l\6\y\a\x\5\j\d\3\f\8\c\0\j\9\1\6\4\k\l\j\1\a\f\h\q\8\a\r\z\o\9\i\x\r\g\m\b\8\3\s\z\0\s\2\y\i\0\y\f\4\w\1\0\7\5\f\3\a\7\b\5\r\p\2\u\4\a\y\p\g\8\n\4\4\x\v\x\x\s\g\2\d\5\5\y\l\6\4\e\z\f\t\2\j\0\i ]] 00:06:52.997 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:52.998 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ bxxi9ygefb07egbuy1hfd6uy6c9k0kf5m79c5mik3wm3eo4rd6c0fahsm8ltbvs252d0gp44q1mot1l9immngkgesbrb03o7vgbiefmlg35dk98vft6hch7qdx5u0gfmd46075ujphz2s4mhin9niu7nxdlowgctzya4d0vlkukpk4v06a01x6inuof5pldim6gq6jatnebr2wpl7p3kbks2h5bc48rvqzm4abhitdoiwlhlvpxfzol4u5qifu14dig0wqplfmf8www1aj2q5cr9dmzk6518tdof48bueuwpz64krk71x6we3vf2vur5a5t95cbdkb8wogj5l4lmr0l3olftw7yxb3yxu5s9fgu20qr8pdett480zhr0cbsx9492tsw3j0bd02dq52rj5gkch2z6j4gc3cnrkal5rakm56g6oezyu90iaaayz2l6dwlu23lfq8cnwerbrt2mv8kvj4rzblztpd8b11239rzljzaelt16cfz8iva0tp0utpl5ecr5a1cieoqv0be1h9567rqhlg0vllnitw1wbmqzegvc5192pnshkqxk75mkpmxwvwm5x53x34l7x3hi9g6mk5osso1441yugqmyxpok7gqtd3rqab2wneoiv09qu075znwa2lle5zxnmpa2mskzj81j2hx9h6jlkwgh18b3ibt4ccka09eglkf7lmbmd0kxtqhsfw5q59ydovkuo5poadwxxjgrcg9a541vixs6zzht5y33ihevucz3as0fey3eev0dphmpip03tetxhlsi7u2ncq2x5hzowl4agb59u8ch3lfskflnye3k9ki00te1dsx4izq8er25dfwy093d7qeryirjt33ea7x5fe769rmq2pi0kpatvpn7atg0wrssdiqgw7xmsysmg9d9g64p36qvju5l6yax5jd3f8c0j9164klj1afhq8arzo9ixrgmb83sz0s2yi0yf4w1075f3a7b5rp2u4aypg8n44xvxxsg2d55yl64ezft2j0i == 
\b\x\x\i\9\y\g\e\f\b\0\7\e\g\b\u\y\1\h\f\d\6\u\y\6\c\9\k\0\k\f\5\m\7\9\c\5\m\i\k\3\w\m\3\e\o\4\r\d\6\c\0\f\a\h\s\m\8\l\t\b\v\s\2\5\2\d\0\g\p\4\4\q\1\m\o\t\1\l\9\i\m\m\n\g\k\g\e\s\b\r\b\0\3\o\7\v\g\b\i\e\f\m\l\g\3\5\d\k\9\8\v\f\t\6\h\c\h\7\q\d\x\5\u\0\g\f\m\d\4\6\0\7\5\u\j\p\h\z\2\s\4\m\h\i\n\9\n\i\u\7\n\x\d\l\o\w\g\c\t\z\y\a\4\d\0\v\l\k\u\k\p\k\4\v\0\6\a\0\1\x\6\i\n\u\o\f\5\p\l\d\i\m\6\g\q\6\j\a\t\n\e\b\r\2\w\p\l\7\p\3\k\b\k\s\2\h\5\b\c\4\8\r\v\q\z\m\4\a\b\h\i\t\d\o\i\w\l\h\l\v\p\x\f\z\o\l\4\u\5\q\i\f\u\1\4\d\i\g\0\w\q\p\l\f\m\f\8\w\w\w\1\a\j\2\q\5\c\r\9\d\m\z\k\6\5\1\8\t\d\o\f\4\8\b\u\e\u\w\p\z\6\4\k\r\k\7\1\x\6\w\e\3\v\f\2\v\u\r\5\a\5\t\9\5\c\b\d\k\b\8\w\o\g\j\5\l\4\l\m\r\0\l\3\o\l\f\t\w\7\y\x\b\3\y\x\u\5\s\9\f\g\u\2\0\q\r\8\p\d\e\t\t\4\8\0\z\h\r\0\c\b\s\x\9\4\9\2\t\s\w\3\j\0\b\d\0\2\d\q\5\2\r\j\5\g\k\c\h\2\z\6\j\4\g\c\3\c\n\r\k\a\l\5\r\a\k\m\5\6\g\6\o\e\z\y\u\9\0\i\a\a\a\y\z\2\l\6\d\w\l\u\2\3\l\f\q\8\c\n\w\e\r\b\r\t\2\m\v\8\k\v\j\4\r\z\b\l\z\t\p\d\8\b\1\1\2\3\9\r\z\l\j\z\a\e\l\t\1\6\c\f\z\8\i\v\a\0\t\p\0\u\t\p\l\5\e\c\r\5\a\1\c\i\e\o\q\v\0\b\e\1\h\9\5\6\7\r\q\h\l\g\0\v\l\l\n\i\t\w\1\w\b\m\q\z\e\g\v\c\5\1\9\2\p\n\s\h\k\q\x\k\7\5\m\k\p\m\x\w\v\w\m\5\x\5\3\x\3\4\l\7\x\3\h\i\9\g\6\m\k\5\o\s\s\o\1\4\4\1\y\u\g\q\m\y\x\p\o\k\7\g\q\t\d\3\r\q\a\b\2\w\n\e\o\i\v\0\9\q\u\0\7\5\z\n\w\a\2\l\l\e\5\z\x\n\m\p\a\2\m\s\k\z\j\8\1\j\2\h\x\9\h\6\j\l\k\w\g\h\1\8\b\3\i\b\t\4\c\c\k\a\0\9\e\g\l\k\f\7\l\m\b\m\d\0\k\x\t\q\h\s\f\w\5\q\5\9\y\d\o\v\k\u\o\5\p\o\a\d\w\x\x\j\g\r\c\g\9\a\5\4\1\v\i\x\s\6\z\z\h\t\5\y\3\3\i\h\e\v\u\c\z\3\a\s\0\f\e\y\3\e\e\v\0\d\p\h\m\p\i\p\0\3\t\e\t\x\h\l\s\i\7\u\2\n\c\q\2\x\5\h\z\o\w\l\4\a\g\b\5\9\u\8\c\h\3\l\f\s\k\f\l\n\y\e\3\k\9\k\i\0\0\t\e\1\d\s\x\4\i\z\q\8\e\r\2\5\d\f\w\y\0\9\3\d\7\q\e\r\y\i\r\j\t\3\3\e\a\7\x\5\f\e\7\6\9\r\m\q\2\p\i\0\k\p\a\t\v\p\n\7\a\t\g\0\w\r\s\s\d\i\q\g\w\7\x\m\s\y\s\m\g\9\d\9\g\6\4\p\3\6\q\v\j\u\5\l\6\y\a\x\5\j\d\3\f\8\c\0\j\9\1\6\4\k\l\j\1\a\f\h\q\8\a\r\z\o\9\i\x\r\g\m\b\8\3\s\z\0\s\2\y\i\0\y\f\4\w\1\0\7\5\f\3\a\7\b\5\r\p\2\u\4\a\y\p\g\8\n\4\4\x\v\x\x\s\g\2\d\5\5\y\l\6\4\e\z\f\t\2\j\0\i ]] 00:06:52.998 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:53.257 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:53.257 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:53.257 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.257 10:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.516 [2024-11-12 10:28:42.023986] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:53.516 [2024-11-12 10:28:42.024089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60963 ] 00:06:53.516 { 00:06:53.516 "subsystems": [ 00:06:53.516 { 00:06:53.516 "subsystem": "bdev", 00:06:53.516 "config": [ 00:06:53.516 { 00:06:53.516 "params": { 00:06:53.516 "block_size": 512, 00:06:53.516 "num_blocks": 1048576, 00:06:53.516 "name": "malloc0" 00:06:53.516 }, 00:06:53.516 "method": "bdev_malloc_create" 00:06:53.516 }, 00:06:53.516 { 00:06:53.516 "params": { 00:06:53.516 "filename": "/dev/zram1", 00:06:53.516 "name": "uring0" 00:06:53.516 }, 00:06:53.516 "method": "bdev_uring_create" 00:06:53.516 }, 00:06:53.516 { 00:06:53.516 "method": "bdev_wait_for_examine" 00:06:53.516 } 00:06:53.516 ] 00:06:53.516 } 00:06:53.516 ] 00:06:53.516 } 00:06:53.516 [2024-11-12 10:28:42.170462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.516 [2024-11-12 10:28:42.205216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.516 [2024-11-12 10:28:42.237714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.894  [2024-11-12T10:28:44.589Z] Copying: 142/512 [MB] (142 MBps) [2024-11-12T10:28:45.526Z] Copying: 304/512 [MB] (161 MBps) [2024-11-12T10:28:46.094Z] Copying: 457/512 [MB] (152 MBps) [2024-11-12T10:28:46.094Z] Copying: 512/512 [MB] (average 149 MBps) 00:06:57.336 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.336 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.596 [2024-11-12 10:28:46.097840] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:57.596 [2024-11-12 10:28:46.097942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61024 ] 00:06:57.596 { 00:06:57.596 "subsystems": [ 00:06:57.596 { 00:06:57.596 "subsystem": "bdev", 00:06:57.596 "config": [ 00:06:57.596 { 00:06:57.596 "params": { 00:06:57.596 "block_size": 512, 00:06:57.596 "num_blocks": 1048576, 00:06:57.596 "name": "malloc0" 00:06:57.596 }, 00:06:57.596 "method": "bdev_malloc_create" 00:06:57.596 }, 00:06:57.596 { 00:06:57.596 "params": { 00:06:57.596 "filename": "/dev/zram1", 00:06:57.596 "name": "uring0" 00:06:57.596 }, 00:06:57.596 "method": "bdev_uring_create" 00:06:57.596 }, 00:06:57.596 { 00:06:57.596 "params": { 00:06:57.596 "name": "uring0" 00:06:57.596 }, 00:06:57.596 "method": "bdev_uring_delete" 00:06:57.596 }, 00:06:57.596 { 00:06:57.596 "method": "bdev_wait_for_examine" 00:06:57.596 } 00:06:57.596 ] 00:06:57.596 } 00:06:57.596 ] 00:06:57.596 } 00:06:57.596 [2024-11-12 10:28:46.241161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.596 [2024-11-12 10:28:46.270149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.596 [2024-11-12 10:28:46.297821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.855  [2024-11-12T10:28:46.872Z] Copying: 0/0 [B] (average 0 Bps) 00:06:58.114 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:58.114 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.115 10:28:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.115 10:28:46 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:58.115 [2024-11-12 10:28:46.694478] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:58.115 [2024-11-12 10:28:46.694566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61048 ] 00:06:58.115 { 00:06:58.115 "subsystems": [ 00:06:58.115 { 00:06:58.115 "subsystem": "bdev", 00:06:58.115 "config": [ 00:06:58.115 { 00:06:58.115 "params": { 00:06:58.115 "block_size": 512, 00:06:58.115 "num_blocks": 1048576, 00:06:58.115 "name": "malloc0" 00:06:58.115 }, 00:06:58.115 "method": "bdev_malloc_create" 00:06:58.115 }, 00:06:58.115 { 00:06:58.115 "params": { 00:06:58.115 "filename": "/dev/zram1", 00:06:58.115 "name": "uring0" 00:06:58.115 }, 00:06:58.115 "method": "bdev_uring_create" 00:06:58.115 }, 00:06:58.115 { 00:06:58.115 "params": { 00:06:58.115 "name": "uring0" 00:06:58.115 }, 00:06:58.115 "method": "bdev_uring_delete" 00:06:58.115 }, 00:06:58.115 { 00:06:58.115 "method": "bdev_wait_for_examine" 00:06:58.115 } 00:06:58.115 ] 00:06:58.115 } 00:06:58.115 ] 00:06:58.115 } 00:06:58.115 [2024-11-12 10:28:46.841929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.115 [2024-11-12 10:28:46.871976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.374 [2024-11-12 10:28:46.900158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.374 [2024-11-12 10:28:47.019907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:58.374 [2024-11-12 10:28:47.019961] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:58.374 [2024-11-12 10:28:47.019971] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:58.374 [2024-11-12 10:28:47.019979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.634 [2024-11-12 10:28:47.176416] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:58.634 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:58.893 00:06:58.893 real 0m13.204s 00:06:58.893 user 0m9.030s 00:06:58.893 sys 0m11.377s 00:06:58.893 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.893 10:28:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.893 ************************************ 00:06:58.893 END TEST dd_uring_copy 00:06:58.893 ************************************ 00:06:58.893 00:06:58.893 real 0m13.448s 00:06:58.893 user 0m9.158s 00:06:58.893 sys 0m11.498s 00:06:58.893 10:28:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.893 10:28:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:58.893 ************************************ 00:06:58.893 END TEST spdk_dd_uring 00:06:58.893 ************************************ 00:06:58.893 10:28:47 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:58.893 10:28:47 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:58.893 10:28:47 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.893 10:28:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.893 ************************************ 00:06:58.893 START TEST spdk_dd_sparse 00:06:58.893 ************************************ 00:06:58.893 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:58.893 * Looking for test storage... 00:06:59.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.153 --rc genhtml_branch_coverage=1 00:06:59.153 --rc genhtml_function_coverage=1 00:06:59.153 --rc genhtml_legend=1 00:06:59.153 --rc geninfo_all_blocks=1 00:06:59.153 --rc geninfo_unexecuted_blocks=1 00:06:59.153 00:06:59.153 ' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.153 --rc genhtml_branch_coverage=1 00:06:59.153 --rc genhtml_function_coverage=1 00:06:59.153 --rc genhtml_legend=1 00:06:59.153 --rc geninfo_all_blocks=1 00:06:59.153 --rc geninfo_unexecuted_blocks=1 00:06:59.153 00:06:59.153 ' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.153 --rc genhtml_branch_coverage=1 00:06:59.153 --rc genhtml_function_coverage=1 00:06:59.153 --rc genhtml_legend=1 00:06:59.153 --rc geninfo_all_blocks=1 00:06:59.153 --rc geninfo_unexecuted_blocks=1 00:06:59.153 00:06:59.153 ' 00:06:59.153 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.153 --rc genhtml_branch_coverage=1 00:06:59.153 --rc genhtml_function_coverage=1 00:06:59.153 --rc genhtml_legend=1 00:06:59.153 --rc geninfo_all_blocks=1 00:06:59.153 --rc geninfo_unexecuted_blocks=1 00:06:59.153 00:06:59.153 ' 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.154 10:28:47 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:59.154 1+0 records in 00:06:59.154 1+0 records out 00:06:59.154 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00627676 s, 668 MB/s 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:59.154 1+0 records in 00:06:59.154 1+0 records out 00:06:59.154 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00640565 s, 655 MB/s 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:59.154 1+0 records in 00:06:59.154 1+0 records out 00:06:59.154 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00422021 s, 994 MB/s 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:59.154 ************************************ 00:06:59.154 START TEST dd_sparse_file_to_file 00:06:59.154 ************************************ 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:59.154 10:28:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.154 { 00:06:59.154 "subsystems": [ 00:06:59.154 { 00:06:59.154 "subsystem": "bdev", 00:06:59.154 "config": [ 00:06:59.154 { 00:06:59.154 "params": { 00:06:59.154 "block_size": 4096, 00:06:59.154 "filename": "dd_sparse_aio_disk", 00:06:59.154 "name": "dd_aio" 00:06:59.154 }, 00:06:59.154 "method": "bdev_aio_create" 00:06:59.154 }, 00:06:59.154 { 00:06:59.154 "params": { 00:06:59.154 "lvs_name": "dd_lvstore", 00:06:59.154 "bdev_name": "dd_aio" 00:06:59.154 }, 00:06:59.154 "method": "bdev_lvol_create_lvstore" 00:06:59.154 }, 00:06:59.154 { 00:06:59.154 "method": "bdev_wait_for_examine" 00:06:59.154 } 00:06:59.154 ] 00:06:59.154 } 00:06:59.154 ] 00:06:59.154 } 00:06:59.154 [2024-11-12 10:28:47.885895] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:06:59.154 [2024-11-12 10:28:47.885995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61141 ] 00:06:59.413 [2024-11-12 10:28:48.033152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.413 [2024-11-12 10:28:48.069448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.413 [2024-11-12 10:28:48.104167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.672  [2024-11-12T10:28:48.430Z] Copying: 12/36 [MB] (average 1090 MBps) 00:06:59.672 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:59.672 00:06:59.672 real 0m0.544s 00:06:59.672 user 0m0.324s 00:06:59.672 sys 0m0.261s 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.672 ************************************ 00:06:59.672 END TEST dd_sparse_file_to_file 00:06:59.672 ************************************ 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.672 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:59.672 ************************************ 00:06:59.672 START TEST dd_sparse_file_to_bdev 00:06:59.672 ************************************ 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:59.673 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.932 [2024-11-12 10:28:48.474382] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:06:59.932 [2024-11-12 10:28:48.474491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61184 ] 00:06:59.932 { 00:06:59.932 "subsystems": [ 00:06:59.932 { 00:06:59.932 "subsystem": "bdev", 00:06:59.932 "config": [ 00:06:59.932 { 00:06:59.932 "params": { 00:06:59.932 "block_size": 4096, 00:06:59.932 "filename": "dd_sparse_aio_disk", 00:06:59.932 "name": "dd_aio" 00:06:59.932 }, 00:06:59.932 "method": "bdev_aio_create" 00:06:59.932 }, 00:06:59.932 { 00:06:59.932 "params": { 00:06:59.932 "lvs_name": "dd_lvstore", 00:06:59.932 "lvol_name": "dd_lvol", 00:06:59.932 "size_in_mib": 36, 00:06:59.932 "thin_provision": true 00:06:59.932 }, 00:06:59.932 "method": "bdev_lvol_create" 00:06:59.932 }, 00:06:59.932 { 00:06:59.932 "method": "bdev_wait_for_examine" 00:06:59.932 } 00:06:59.932 ] 00:06:59.932 } 00:06:59.932 ] 00:06:59.932 } 00:06:59.932 [2024-11-12 10:28:48.618777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.932 [2024-11-12 10:28:48.647079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.932 [2024-11-12 10:28:48.674216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.191  [2024-11-12T10:28:48.949Z] Copying: 12/36 [MB] (average 545 MBps) 00:07:00.191 00:07:00.191 00:07:00.191 real 0m0.474s 00:07:00.191 user 0m0.312s 00:07:00.191 sys 0m0.216s 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.191 ************************************ 00:07:00.191 END TEST dd_sparse_file_to_bdev 00:07:00.191 ************************************ 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.191 10:28:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:00.450 ************************************ 00:07:00.450 START TEST dd_sparse_bdev_to_file 00:07:00.450 ************************************ 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:00.450 10:28:48 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:00.450 [2024-11-12 10:28:49.003591] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:07:00.450 [2024-11-12 10:28:49.003693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:07:00.450 { 00:07:00.450 "subsystems": [ 00:07:00.450 { 00:07:00.450 "subsystem": "bdev", 00:07:00.450 "config": [ 00:07:00.450 { 00:07:00.450 "params": { 00:07:00.450 "block_size": 4096, 00:07:00.450 "filename": "dd_sparse_aio_disk", 00:07:00.450 "name": "dd_aio" 00:07:00.450 }, 00:07:00.450 "method": "bdev_aio_create" 00:07:00.450 }, 00:07:00.450 { 00:07:00.450 "method": "bdev_wait_for_examine" 00:07:00.450 } 00:07:00.450 ] 00:07:00.451 } 00:07:00.451 ] 00:07:00.451 } 00:07:00.451 [2024-11-12 10:28:49.150091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.451 [2024-11-12 10:28:49.178844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.451 [2024-11-12 10:28:49.206129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.713  [2024-11-12T10:28:49.471Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:00.713 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:00.713 00:07:00.713 real 0m0.487s 00:07:00.713 user 0m0.297s 
00:07:00.713 sys 0m0.240s 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.713 ************************************ 00:07:00.713 END TEST dd_sparse_bdev_to_file 00:07:00.713 ************************************ 00:07:00.713 10:28:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:00.973 00:07:00.973 real 0m1.922s 00:07:00.973 user 0m1.118s 00:07:00.973 sys 0m0.941s 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.973 ************************************ 00:07:00.973 END TEST spdk_dd_sparse 00:07:00.973 ************************************ 00:07:00.973 10:28:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:00.973 10:28:49 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.973 10:28:49 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.973 10:28:49 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.973 10:28:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:00.973 ************************************ 00:07:00.973 START TEST spdk_dd_negative 00:07:00.973 ************************************ 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.973 * Looking for test storage... 
00:07:00.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.973 --rc genhtml_branch_coverage=1 00:07:00.973 --rc genhtml_function_coverage=1 00:07:00.973 --rc genhtml_legend=1 00:07:00.973 --rc geninfo_all_blocks=1 00:07:00.973 --rc geninfo_unexecuted_blocks=1 00:07:00.973 00:07:00.973 ' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.973 --rc genhtml_branch_coverage=1 00:07:00.973 --rc genhtml_function_coverage=1 00:07:00.973 --rc genhtml_legend=1 00:07:00.973 --rc geninfo_all_blocks=1 00:07:00.973 --rc geninfo_unexecuted_blocks=1 00:07:00.973 00:07:00.973 ' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.973 --rc genhtml_branch_coverage=1 00:07:00.973 --rc genhtml_function_coverage=1 00:07:00.973 --rc genhtml_legend=1 00:07:00.973 --rc geninfo_all_blocks=1 00:07:00.973 --rc geninfo_unexecuted_blocks=1 00:07:00.973 00:07:00.973 ' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.973 --rc genhtml_branch_coverage=1 00:07:00.973 --rc genhtml_function_coverage=1 00:07:00.973 --rc genhtml_legend=1 00:07:00.973 --rc geninfo_all_blocks=1 00:07:00.973 --rc geninfo_unexecuted_blocks=1 00:07:00.973 00:07:00.973 ' 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.973 10:28:49 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.974 10:28:49 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.974 10:28:49 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:00.974 10:28:49 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.234 ************************************ 00:07:01.234 START TEST 
dd_invalid_arguments 00:07:01.234 ************************************ 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.234 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.234 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:01.234 00:07:01.234 CPU options: 00:07:01.234 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:01.234 (like [0,1,10]) 00:07:01.234 --lcores lcore to CPU mapping list. The list is in the format: 00:07:01.234 [<,lcores[@CPUs]>...] 00:07:01.234 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:01.234 Within the group, '-' is used for range separator, 00:07:01.234 ',' is used for single number separator. 00:07:01.234 '( )' can be omitted for single element group, 00:07:01.234 '@' can be omitted if cpus and lcores have the same value 00:07:01.234 --disable-cpumask-locks Disable CPU core lock files. 00:07:01.234 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:01.234 pollers in the app support interrupt mode) 00:07:01.234 -p, --main-core main (primary) core for DPDK 00:07:01.234 00:07:01.234 Configuration options: 00:07:01.234 -c, --config, --json JSON config file 00:07:01.234 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:01.234 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:01.234 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:01.234 --rpcs-allowed comma-separated list of permitted RPCS 00:07:01.234 --json-ignore-init-errors don't exit on invalid config entry 00:07:01.234 00:07:01.234 Memory options: 00:07:01.234 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:01.234 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:01.234 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:01.234 -R, --huge-unlink unlink huge files after initialization 00:07:01.234 -n, --mem-channels number of memory channels used for DPDK 00:07:01.234 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:01.234 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:01.234 --no-huge run without using hugepages 00:07:01.234 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:01.234 -i, --shm-id shared memory ID (optional) 00:07:01.234 -g, --single-file-segments force creating just one hugetlbfs file 00:07:01.234 00:07:01.234 PCI options: 00:07:01.234 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:01.234 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:01.234 -u, --no-pci disable PCI access 00:07:01.234 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:01.234 00:07:01.234 Log options: 00:07:01.234 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:01.234 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:01.234 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:01.234 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:01.234 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:01.234 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:01.234 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:01.234 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:01.234 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:01.234 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:01.234 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:01.234 --silence-noticelog disable notice level logging to stderr 00:07:01.234 00:07:01.234 Trace options: 00:07:01.234 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:01.234 setting 0 to disable trace (default 32768) 00:07:01.234 Tracepoints vary in size and can use more than one trace entry. 00:07:01.234 -e, --tpoint-group [:] 00:07:01.234 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:01.234 [2024-11-12 10:28:49.821509] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:01.234 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:01.234 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:01.234 bdev_raid, scheduler, all). 00:07:01.234 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:01.234 a tracepoint group. First tpoint inside a group can be enabled by 00:07:01.234 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:01.234 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:01.234 in /include/spdk_internal/trace_defs.h 00:07:01.234 00:07:01.234 Other options: 00:07:01.234 -h, --help show this usage 00:07:01.234 -v, --version print SPDK version 00:07:01.235 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:01.235 --env-context Opaque context for use of the env implementation 00:07:01.235 00:07:01.235 Application specific: 00:07:01.235 [--------- DD Options ---------] 00:07:01.235 --if Input file. Must specify either --if or --ib. 00:07:01.235 --ib Input bdev. Must specifier either --if or --ib 00:07:01.235 --of Output file. Must specify either --of or --ob. 00:07:01.235 --ob Output bdev. Must specify either --of or --ob. 00:07:01.235 --iflag Input file flags. 00:07:01.235 --oflag Output file flags. 00:07:01.235 --bs I/O unit size (default: 4096) 00:07:01.235 --qd Queue depth (default: 2) 00:07:01.235 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:01.235 --skip Skip this many I/O units at start of input. (default: 0) 00:07:01.235 --seek Skip this many I/O units at start of output. (default: 0) 00:07:01.235 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:01.235 --sparse Enable hole skipping in input target 00:07:01.235 Available iflag and oflag values: 00:07:01.235 append - append mode 00:07:01.235 direct - use direct I/O for data 00:07:01.235 directory - fail unless a directory 00:07:01.235 dsync - use synchronized I/O for data 00:07:01.235 noatime - do not update access time 00:07:01.235 noctty - do not assign controlling terminal from file 00:07:01.235 nofollow - do not follow symlinks 00:07:01.235 nonblock - use non-blocking I/O 00:07:01.235 sync - use synchronized I/O for data and metadata 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.235 00:07:01.235 real 0m0.094s 00:07:01.235 user 0m0.052s 00:07:01.235 sys 0m0.028s 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:01.235 ************************************ 00:07:01.235 END TEST dd_invalid_arguments 00:07:01.235 ************************************ 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.235 ************************************ 00:07:01.235 START TEST dd_double_input 00:07:01.235 ************************************ 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.235 [2024-11-12 10:28:49.945674] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.235 00:07:01.235 real 0m0.076s 00:07:01.235 user 0m0.046s 00:07:01.235 sys 0m0.029s 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.235 10:28:49 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:01.235 ************************************ 00:07:01.235 END TEST dd_double_input 00:07:01.235 ************************************ 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 ************************************ 00:07:01.495 START TEST dd_double_output 00:07:01.495 ************************************ 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:01.495 [2024-11-12 10:28:50.077588] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.495 00:07:01.495 real 0m0.078s 00:07:01.495 user 0m0.051s 00:07:01.495 sys 0m0.025s 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.495 ************************************ 00:07:01.495 END TEST dd_double_output 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 ************************************ 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 ************************************ 00:07:01.495 START TEST dd_no_input 00:07:01.495 ************************************ 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.495 [2024-11-12 10:28:50.209362] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.495 00:07:01.495 real 0m0.079s 00:07:01.495 user 0m0.046s 00:07:01.495 sys 0m0.032s 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.495 10:28:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 ************************************ 00:07:01.495 END TEST dd_no_input 00:07:01.495 ************************************ 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.754 ************************************ 00:07:01.754 START TEST dd_no_output 00:07:01.754 ************************************ 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.754 [2024-11-12 10:28:50.343063] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:01.754 10:28:50 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.754 00:07:01.754 real 0m0.077s 00:07:01.754 user 0m0.048s 00:07:01.754 sys 0m0.028s 00:07:01.754 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:01.755 ************************************ 00:07:01.755 END TEST dd_no_output 00:07:01.755 ************************************ 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.755 ************************************ 00:07:01.755 START TEST dd_wrong_blocksize 00:07:01.755 ************************************ 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.755 [2024-11-12 10:28:50.472433] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.755 00:07:01.755 real 0m0.076s 00:07:01.755 user 0m0.053s 00:07:01.755 sys 0m0.023s 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.755 10:28:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:01.755 ************************************ 00:07:01.755 END TEST dd_wrong_blocksize 00:07:01.755 ************************************ 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.014 ************************************ 00:07:02.014 START TEST dd_smaller_blocksize 00:07:02.014 ************************************ 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.014 
10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.014 10:28:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.014 [2024-11-12 10:28:50.604993] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:07:02.014 [2024-11-12 10:28:50.605101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:07:02.014 [2024-11-12 10:28:50.754829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.273 [2024-11-12 10:28:50.792380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.273 [2024-11-12 10:28:50.824737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.532 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:02.791 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:02.791 [2024-11-12 10:28:51.396188] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:02.791 [2024-11-12 10:28:51.396255] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.791 [2024-11-12 10:28:51.468762] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.791 00:07:02.791 real 0m0.986s 00:07:02.791 user 0m0.352s 00:07:02.791 sys 0m0.526s 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.791 ************************************ 00:07:02.791 END TEST dd_smaller_blocksize 00:07:02.791 ************************************ 00:07:02.791 10:28:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.050 ************************************ 00:07:03.050 START TEST dd_invalid_count 00:07:03.050 ************************************ 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.050 [2024-11-12 10:28:51.644973] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.050 00:07:03.050 real 0m0.079s 00:07:03.050 user 0m0.044s 00:07:03.050 sys 0m0.034s 00:07:03.050 ************************************ 00:07:03.050 END TEST dd_invalid_count 00:07:03.050 ************************************ 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.050 ************************************ 
00:07:03.050 START TEST dd_invalid_oflag 00:07:03.050 ************************************ 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.050 [2024-11-12 10:28:51.774047] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.050 00:07:03.050 real 0m0.076s 00:07:03.050 user 0m0.048s 00:07:03.050 sys 0m0.026s 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.050 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:03.050 ************************************ 00:07:03.050 END TEST dd_invalid_oflag 00:07:03.050 ************************************ 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 START TEST dd_invalid_iflag 00:07:03.310 
************************************ 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:03.310 [2024-11-12 10:28:51.901755] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.310 00:07:03.310 real 0m0.075s 00:07:03.310 user 0m0.051s 00:07:03.310 sys 0m0.024s 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 END TEST dd_invalid_iflag 00:07:03.310 ************************************ 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 START TEST dd_unknown_flag 00:07:03.310 ************************************ 00:07:03.310 
10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.310 10:28:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:03.310 [2024-11-12 10:28:52.023070] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
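The three flag tests around this point (dd_invalid_oflag, dd_invalid_iflag, and the dd_unknown_flag run starting here) exercise the same argument parser: the first two are rejected because the matching --of/--if is missing, while the third supplies real files and is rejected by parse_flags for the unknown flag value -1. Condensed from the traced invocations and the errors they produce (paths and messages exactly as in this log; SPDK_DD is only shorthand for the binary path):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib= --ob= --oflag=0    # --oflags may be used only with --of  (es=22)
"$SPDK_DD" --ib= --ob= --iflag=0    # --iflags may be used only with --if  (es=22)
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
                                    # Unknown file flag: -1 (rejected in parse_flags, app stops non-zero)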
00:07:03.310 [2024-11-12 10:28:52.023154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61540 ] 00:07:03.570 [2024-11-12 10:28:52.172963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.570 [2024-11-12 10:28:52.213511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.570 [2024-11-12 10:28:52.252438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.570 [2024-11-12 10:28:52.274581] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:03.570 [2024-11-12 10:28:52.274697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.570 [2024-11-12 10:28:52.274751] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:03.570 [2024-11-12 10:28:52.274763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.570 [2024-11-12 10:28:52.275014] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:03.570 [2024-11-12 10:28:52.275046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.570 [2024-11-12 10:28:52.275105] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:03.570 [2024-11-12 10:28:52.275115] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:03.829 [2024-11-12 10:28:52.345382] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:03.829 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.829 00:07:03.829 real 0m0.443s 00:07:03.829 user 0m0.233s 00:07:03.829 sys 0m0.116s 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.830 ************************************ 00:07:03.830 END TEST dd_unknown_flag 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:03.830 ************************************ 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.830 ************************************ 00:07:03.830 START TEST dd_invalid_json 00:07:03.830 ************************************ 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.830 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.830 [2024-11-12 10:28:52.516799] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
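The dd_invalid_json run being traced here passes --json /dev/fd/62, and the bare ':' recorded at negative_dd.sh:94 suggests that descriptor is fed from a command that prints nothing. A minimal way to reproduce that shape (the exact redirection used by the test script is an assumption; the error it provokes appears a few entries below):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --json <(:)    # empty JSON stream -> "JSON data cannot be empty", spdk_app_stop'd on non-zero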
00:07:03.830 [2024-11-12 10:28:52.516876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61569 ] 00:07:04.090 [2024-11-12 10:28:52.660062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.090 [2024-11-12 10:28:52.698507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.090 [2024-11-12 10:28:52.698614] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:04.090 [2024-11-12 10:28:52.698633] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:04.090 [2024-11-12 10:28:52.698642] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.090 [2024-11-12 10:28:52.698681] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.090 00:07:04.090 real 0m0.308s 00:07:04.090 user 0m0.153s 00:07:04.090 sys 0m0.054s 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.090 ************************************ 00:07:04.090 END TEST dd_invalid_json 00:07:04.090 ************************************ 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.090 ************************************ 00:07:04.090 START TEST dd_invalid_seek 00:07:04.090 ************************************ 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:04.090 
10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.090 10:28:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.350 { 00:07:04.350 "subsystems": [ 00:07:04.350 { 00:07:04.350 "subsystem": "bdev", 00:07:04.350 "config": [ 00:07:04.350 { 00:07:04.350 "params": { 00:07:04.350 "block_size": 512, 00:07:04.350 "num_blocks": 512, 00:07:04.350 "name": "malloc0" 00:07:04.350 }, 00:07:04.350 "method": "bdev_malloc_create" 00:07:04.350 }, 00:07:04.350 { 00:07:04.350 "params": { 00:07:04.350 "block_size": 512, 00:07:04.350 "num_blocks": 512, 00:07:04.350 "name": "malloc1" 00:07:04.350 }, 00:07:04.350 "method": "bdev_malloc_create" 00:07:04.350 }, 00:07:04.350 { 00:07:04.350 "method": "bdev_wait_for_examine" 00:07:04.350 } 00:07:04.350 ] 00:07:04.350 } 00:07:04.350 ] 00:07:04.350 } 00:07:04.350 [2024-11-12 10:28:52.895155] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
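The JSON blob printed just above (emitted through gen_conf onto /dev/fd/62) creates two malloc bdevs, malloc0 and malloc1, each with 512 blocks of 512 bytes (256 KiB). Against that config the traced command asks for --seek=513, one block past the end of the output bdev, which is exactly what the "--seek value too big (513) - only 512 blocks available in output" error further down reports:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
# malloc1 has only 512 blocks, so writing at block offset 513 cannot fit and the copy is refused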
00:07:04.350 [2024-11-12 10:28:52.895311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:07:04.350 [2024-11-12 10:28:53.048275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.350 [2024-11-12 10:28:53.083420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.610 [2024-11-12 10:28:53.116533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.610 [2024-11-12 10:28:53.163612] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:04.610 [2024-11-12 10:28:53.163697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.610 [2024-11-12 10:28:53.231417] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.610 00:07:04.610 real 0m0.465s 00:07:04.610 user 0m0.305s 00:07:04.610 sys 0m0.121s 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:04.610 ************************************ 00:07:04.610 END TEST dd_invalid_seek 00:07:04.610 ************************************ 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.610 ************************************ 00:07:04.610 START TEST dd_invalid_skip 00:07:04.610 ************************************ 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.610 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.869 [2024-11-12 10:28:53.401266] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
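dd_invalid_skip, starting here, is the mirror image on the read side: the same pair of 512-block malloc bdevs is created (see the method_bdev_malloc_create arrays above), and --skip=513 points past the end of the input bdev, so the run below ends with "--skip value too big (513) - only 512 blocks available in input":

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
# skip applies to the input side; 513 > 512 input blocks -> rejected before any data is moved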
00:07:04.869 [2024-11-12 10:28:53.401357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61632 ] 00:07:04.869 { 00:07:04.869 "subsystems": [ 00:07:04.869 { 00:07:04.869 "subsystem": "bdev", 00:07:04.869 "config": [ 00:07:04.869 { 00:07:04.869 "params": { 00:07:04.869 "block_size": 512, 00:07:04.869 "num_blocks": 512, 00:07:04.869 "name": "malloc0" 00:07:04.869 }, 00:07:04.869 "method": "bdev_malloc_create" 00:07:04.869 }, 00:07:04.869 { 00:07:04.869 "params": { 00:07:04.869 "block_size": 512, 00:07:04.869 "num_blocks": 512, 00:07:04.869 "name": "malloc1" 00:07:04.869 }, 00:07:04.869 "method": "bdev_malloc_create" 00:07:04.869 }, 00:07:04.869 { 00:07:04.869 "method": "bdev_wait_for_examine" 00:07:04.869 } 00:07:04.869 ] 00:07:04.869 } 00:07:04.869 ] 00:07:04.869 } 00:07:04.869 [2024-11-12 10:28:53.549958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.869 [2024-11-12 10:28:53.589861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.129 [2024-11-12 10:28:53.628330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.129 [2024-11-12 10:28:53.677092] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:05.129 [2024-11-12 10:28:53.677154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.129 [2024-11-12 10:28:53.744753] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.129 00:07:05.129 real 0m0.468s 00:07:05.129 user 0m0.301s 00:07:05.129 sys 0m0.126s 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:05.129 ************************************ 00:07:05.129 END TEST dd_invalid_skip 00:07:05.129 ************************************ 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.129 ************************************ 00:07:05.129 START TEST dd_invalid_input_count 00:07:05.129 ************************************ 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:07:05.129 10:28:53 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:05.129 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.130 10:28:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.389 { 00:07:05.389 "subsystems": [ 00:07:05.389 { 00:07:05.389 "subsystem": "bdev", 00:07:05.389 "config": [ 00:07:05.389 { 00:07:05.389 "params": { 00:07:05.389 "block_size": 512, 00:07:05.389 "num_blocks": 512, 00:07:05.389 "name": "malloc0" 00:07:05.389 }, 
00:07:05.389 "method": "bdev_malloc_create" 00:07:05.389 }, 00:07:05.389 { 00:07:05.389 "params": { 00:07:05.389 "block_size": 512, 00:07:05.389 "num_blocks": 512, 00:07:05.389 "name": "malloc1" 00:07:05.389 }, 00:07:05.389 "method": "bdev_malloc_create" 00:07:05.389 }, 00:07:05.389 { 00:07:05.389 "method": "bdev_wait_for_examine" 00:07:05.389 } 00:07:05.389 ] 00:07:05.389 } 00:07:05.389 ] 00:07:05.389 } 00:07:05.389 [2024-11-12 10:28:53.926144] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:07:05.389 [2024-11-12 10:28:53.926247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61660 ] 00:07:05.389 [2024-11-12 10:28:54.073535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.389 [2024-11-12 10:28:54.108329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.389 [2024-11-12 10:28:54.142734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.649 [2024-11-12 10:28:54.190407] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:05.649 [2024-11-12 10:28:54.190474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.649 [2024-11-12 10:28:54.259226] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.649 00:07:05.649 real 0m0.458s 00:07:05.649 user 0m0.305s 00:07:05.649 sys 0m0.114s 00:07:05.649 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.650 ************************************ 00:07:05.650 END TEST dd_invalid_input_count 00:07:05.650 ************************************ 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.650 ************************************ 00:07:05.650 START TEST dd_invalid_output_count 00:07:05.650 ************************************ 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.650 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:05.909 [2024-11-12 10:28:54.427436] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
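The two --count tests bracket the same 512-block limit from both directions: dd_invalid_input_count (finished above) asks to read 513 blocks out of a 512-block malloc bdev, and dd_invalid_output_count, whose invocation is traced just above, asks to write 513 blocks into one. Condensed, with the error each run reports:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
#   -> --count value too big (513) - only 512 blocks available from input
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
#   -> --count value too big (513) - only 512 blocks available in output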
00:07:05.909 [2024-11-12 10:28:54.427523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61699 ] 00:07:05.909 { 00:07:05.909 "subsystems": [ 00:07:05.909 { 00:07:05.909 "subsystem": "bdev", 00:07:05.909 "config": [ 00:07:05.909 { 00:07:05.909 "params": { 00:07:05.909 "block_size": 512, 00:07:05.909 "num_blocks": 512, 00:07:05.909 "name": "malloc0" 00:07:05.909 }, 00:07:05.909 "method": "bdev_malloc_create" 00:07:05.909 }, 00:07:05.909 { 00:07:05.909 "method": "bdev_wait_for_examine" 00:07:05.909 } 00:07:05.909 ] 00:07:05.909 } 00:07:05.909 ] 00:07:05.909 } 00:07:05.909 [2024-11-12 10:28:54.571249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.909 [2024-11-12 10:28:54.608996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.909 [2024-11-12 10:28:54.642248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.170 [2024-11-12 10:28:54.680217] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:06.170 [2024-11-12 10:28:54.680334] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.170 [2024-11-12 10:28:54.749477] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.170 00:07:06.170 real 0m0.439s 00:07:06.170 user 0m0.286s 00:07:06.170 sys 0m0.108s 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:06.170 ************************************ 00:07:06.170 END TEST dd_invalid_output_count 00:07:06.170 ************************************ 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.170 ************************************ 00:07:06.170 START TEST dd_bs_not_multiple 00:07:06.170 ************************************ 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:06.170 10:28:54 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.170 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.171 10:28:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.171 [2024-11-12 10:28:54.927001] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
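dd_bs_not_multiple, traced here, is the last of the argument checks in this suite: both malloc bdevs use a 512-byte native block size, and --bs=513 is not a multiple of it, so spdk_dd refuses the transfer:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
# -> --bs value must be a multiple of input native block size (512)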
00:07:06.171 [2024-11-12 10:28:54.927140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61725 ] 00:07:06.171 { 00:07:06.171 "subsystems": [ 00:07:06.171 { 00:07:06.171 "subsystem": "bdev", 00:07:06.171 "config": [ 00:07:06.171 { 00:07:06.171 "params": { 00:07:06.171 "block_size": 512, 00:07:06.171 "num_blocks": 512, 00:07:06.171 "name": "malloc0" 00:07:06.171 }, 00:07:06.171 "method": "bdev_malloc_create" 00:07:06.171 }, 00:07:06.171 { 00:07:06.171 "params": { 00:07:06.171 "block_size": 512, 00:07:06.171 "num_blocks": 512, 00:07:06.171 "name": "malloc1" 00:07:06.171 }, 00:07:06.171 "method": "bdev_malloc_create" 00:07:06.171 }, 00:07:06.171 { 00:07:06.171 "method": "bdev_wait_for_examine" 00:07:06.171 } 00:07:06.171 ] 00:07:06.171 } 00:07:06.171 ] 00:07:06.171 } 00:07:06.430 [2024-11-12 10:28:55.074077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.430 [2024-11-12 10:28:55.110696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.430 [2024-11-12 10:28:55.142458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.689 [2024-11-12 10:28:55.190543] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:06.689 [2024-11-12 10:28:55.190609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.689 [2024-11-12 10:28:55.259213] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.689 00:07:06.689 real 0m0.454s 00:07:06.689 user 0m0.302s 00:07:06.689 sys 0m0.112s 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 ************************************ 00:07:06.689 END TEST dd_bs_not_multiple 00:07:06.689 ************************************ 00:07:06.689 00:07:06.689 real 0m5.810s 00:07:06.689 user 0m3.075s 00:07:06.689 sys 0m2.136s 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.689 10:28:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 ************************************ 00:07:06.689 END TEST spdk_dd_negative 00:07:06.689 ************************************ 00:07:06.689 00:07:06.689 real 1m4.051s 00:07:06.689 user 0m40.716s 00:07:06.689 sys 0m27.034s 00:07:06.689 10:28:55 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.689 10:28:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 
************************************ 00:07:06.689 END TEST spdk_dd 00:07:06.689 ************************************ 00:07:06.689 10:28:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:06.689 10:28:55 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:06.689 10:28:55 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:06.689 10:28:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.689 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:07:06.949 10:28:55 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:06.949 10:28:55 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:06.949 10:28:55 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:06.949 10:28:55 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:06.949 10:28:55 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:06.949 10:28:55 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:06.949 10:28:55 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.949 10:28:55 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:06.949 10:28:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.949 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:07:06.949 ************************************ 00:07:06.949 START TEST nvmf_tcp 00:07:06.949 ************************************ 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.949 * Looking for test storage... 00:07:06.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.949 10:28:55 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.949 --rc genhtml_branch_coverage=1 00:07:06.949 --rc genhtml_function_coverage=1 00:07:06.949 --rc genhtml_legend=1 00:07:06.949 --rc geninfo_all_blocks=1 00:07:06.949 --rc geninfo_unexecuted_blocks=1 00:07:06.949 00:07:06.949 ' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.949 --rc genhtml_branch_coverage=1 00:07:06.949 --rc genhtml_function_coverage=1 00:07:06.949 --rc genhtml_legend=1 00:07:06.949 --rc geninfo_all_blocks=1 00:07:06.949 --rc geninfo_unexecuted_blocks=1 00:07:06.949 00:07:06.949 ' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.949 --rc genhtml_branch_coverage=1 00:07:06.949 --rc genhtml_function_coverage=1 00:07:06.949 --rc genhtml_legend=1 00:07:06.949 --rc geninfo_all_blocks=1 00:07:06.949 --rc geninfo_unexecuted_blocks=1 00:07:06.949 00:07:06.949 ' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.949 --rc genhtml_branch_coverage=1 00:07:06.949 --rc genhtml_function_coverage=1 00:07:06.949 --rc genhtml_legend=1 00:07:06.949 --rc geninfo_all_blocks=1 00:07:06.949 --rc geninfo_unexecuted_blocks=1 00:07:06.949 00:07:06.949 ' 00:07:06.949 10:28:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:06.949 10:28:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.949 10:28:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.949 10:28:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.949 ************************************ 00:07:06.949 START TEST nvmf_target_core 00:07:06.949 ************************************ 00:07:06.949 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.209 * Looking for test storage... 00:07:07.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.209 --rc genhtml_branch_coverage=1 00:07:07.209 --rc genhtml_function_coverage=1 00:07:07.209 --rc genhtml_legend=1 00:07:07.209 --rc geninfo_all_blocks=1 00:07:07.209 --rc geninfo_unexecuted_blocks=1 00:07:07.209 00:07:07.209 ' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.209 --rc genhtml_branch_coverage=1 00:07:07.209 --rc genhtml_function_coverage=1 00:07:07.209 --rc genhtml_legend=1 00:07:07.209 --rc geninfo_all_blocks=1 00:07:07.209 --rc geninfo_unexecuted_blocks=1 00:07:07.209 00:07:07.209 ' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.209 --rc genhtml_branch_coverage=1 00:07:07.209 --rc genhtml_function_coverage=1 00:07:07.209 --rc genhtml_legend=1 00:07:07.209 --rc geninfo_all_blocks=1 00:07:07.209 --rc geninfo_unexecuted_blocks=1 00:07:07.209 00:07:07.209 ' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.209 --rc genhtml_branch_coverage=1 00:07:07.209 --rc genhtml_function_coverage=1 00:07:07.209 --rc genhtml_legend=1 00:07:07.209 --rc geninfo_all_blocks=1 00:07:07.209 --rc geninfo_unexecuted_blocks=1 00:07:07.209 00:07:07.209 ' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:07.209 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.210 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.210 ************************************ 00:07:07.210 START TEST nvmf_host_management 00:07:07.210 ************************************ 00:07:07.210 10:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.469 * Looking for test storage... 
00:07:07.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:07.469 10:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.469 10:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.469 10:28:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.470 --rc genhtml_branch_coverage=1 00:07:07.470 --rc genhtml_function_coverage=1 00:07:07.470 --rc genhtml_legend=1 00:07:07.470 --rc geninfo_all_blocks=1 00:07:07.470 --rc geninfo_unexecuted_blocks=1 00:07:07.470 00:07:07.470 ' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.470 --rc genhtml_branch_coverage=1 00:07:07.470 --rc genhtml_function_coverage=1 00:07:07.470 --rc genhtml_legend=1 00:07:07.470 --rc geninfo_all_blocks=1 00:07:07.470 --rc geninfo_unexecuted_blocks=1 00:07:07.470 00:07:07.470 ' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.470 --rc genhtml_branch_coverage=1 00:07:07.470 --rc genhtml_function_coverage=1 00:07:07.470 --rc genhtml_legend=1 00:07:07.470 --rc geninfo_all_blocks=1 00:07:07.470 --rc geninfo_unexecuted_blocks=1 00:07:07.470 00:07:07.470 ' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.470 --rc genhtml_branch_coverage=1 00:07:07.470 --rc genhtml_function_coverage=1 00:07:07.470 --rc genhtml_legend=1 00:07:07.470 --rc geninfo_all_blocks=1 00:07:07.470 --rc geninfo_unexecuted_blocks=1 00:07:07.470 00:07:07.470 ' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.470 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.470 10:28:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:07.470 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:07.471 Cannot find device "nvmf_init_br" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:07.471 Cannot find device "nvmf_init_br2" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:07.471 Cannot find device "nvmf_tgt_br" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:07.471 Cannot find device "nvmf_tgt_br2" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:07.471 Cannot find device "nvmf_init_br" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:07.471 Cannot find device "nvmf_init_br2" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:07.471 Cannot find device "nvmf_tgt_br" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:07.471 Cannot find device "nvmf_tgt_br2" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:07.471 Cannot find device "nvmf_br" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:07.471 Cannot find device "nvmf_init_if" 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:07.471 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:07.471 Cannot find device "nvmf_init_if2" 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:07.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:07.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:07.730 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:07.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:07.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:07:07.990 00:07:07.990 --- 10.0.0.3 ping statistics --- 00:07:07.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.990 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:07.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:07.990 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:07:07.990 00:07:07.990 --- 10.0.0.4 ping statistics --- 00:07:07.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.990 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:07.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:07.990 00:07:07.990 --- 10.0.0.1 ping statistics --- 00:07:07.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.990 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:07.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:07:07.990 00:07:07.990 --- 10.0.0.2 ping statistics --- 00:07:07.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.990 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.990 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62067 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62067 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62067 ']' 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.991 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.991 [2024-11-12 10:28:56.675547] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
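The block above is nvmf_veth_init from test/nvmf/common.sh building the virtual network that the TCP target will listen on. A condensed sketch reconstructed from the trace (interface names and addresses exactly as logged; address assignment, link-up, and bridge-enslave steps are abbreviated, so this is not the verbatim script):

    # target-side interfaces live in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # host initiator, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # host initiator, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target, 10.0.0.3/24 (in netns)
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target, 10.0.0.4/24 (in netns)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # all four *_br peers are enslaved to nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # plus nvmf_init_if2 and a FORWARD rule on nvmf_br
    ping -c 1 10.0.0.3                                             # the four pings above verify reachability

The four pings confirm that the host-side initiator addresses (10.0.0.1, 10.0.0.2) and the in-namespace target addresses (10.0.0.3, 10.0.0.4) reach each other across nvmf_br before nvmf_tgt is started inside nvmf_tgt_ns_spdk.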
00:07:07.991 [2024-11-12 10:28:56.675630] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.282 [2024-11-12 10:28:56.831634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.282 [2024-11-12 10:28:56.876984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.282 [2024-11-12 10:28:56.877071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.282 [2024-11-12 10:28:56.877086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.282 [2024-11-12 10:28:56.877096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.283 [2024-11-12 10:28:56.877115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.283 [2024-11-12 10:28:56.878530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.283 [2024-11-12 10:28:56.878696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.283 [2024-11-12 10:28:56.879371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.283 [2024-11-12 10:28:56.879382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.283 [2024-11-12 10:28:56.915110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.283 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.283 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:08.283 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.283 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.283 10:28:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.283 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.283 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.283 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.283 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.283 [2024-11-12 10:28:57.011685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.561 Malloc0 00:07:08.561 [2024-11-12 10:28:57.078755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62108 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62108 /var/tmp/bdevperf.sock 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62108 ']' 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
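At this point the target side is configured: nvmf_create_transport -t tcp -o -u 8192 initialized the TCP transport, host_management.sh regenerated rpcs.txt (the rm at @22 and cat at @23) and replayed it through rpc_cmd at @30, a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above) named Malloc0 was created, and the subsystem is listening on 10.0.0.3:4420. The rpcs.txt heredoc itself is not echoed in this log; an illustrative RPC sequence consistent with the values that are logged (serial SPDKISFASTANDAWESOME, subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0) might look like:

    # illustrative only -- the actual rpcs.txt contents are not shown in this excerpt
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420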
00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:08.561 { 00:07:08.561 "params": { 00:07:08.561 "name": "Nvme$subsystem", 00:07:08.561 "trtype": "$TEST_TRANSPORT", 00:07:08.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.561 "adrfam": "ipv4", 00:07:08.561 "trsvcid": "$NVMF_PORT", 00:07:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.561 "hdgst": ${hdgst:-false}, 00:07:08.561 "ddgst": ${ddgst:-false} 00:07:08.561 }, 00:07:08.561 "method": "bdev_nvme_attach_controller" 00:07:08.561 } 00:07:08.561 EOF 00:07:08.561 )") 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:08.561 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:08.561 "params": { 00:07:08.561 "name": "Nvme0", 00:07:08.561 "trtype": "tcp", 00:07:08.561 "traddr": "10.0.0.3", 00:07:08.561 "adrfam": "ipv4", 00:07:08.561 "trsvcid": "4420", 00:07:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.561 "hdgst": false, 00:07:08.561 "ddgst": false 00:07:08.561 }, 00:07:08.561 "method": "bdev_nvme_attach_controller" 00:07:08.562 }' 00:07:08.562 [2024-11-12 10:28:57.183464] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:07:08.562 [2024-11-12 10:28:57.183550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62108 ] 00:07:08.820 [2024-11-12 10:28:57.394646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.820 [2024-11-12 10:28:57.437770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.820 [2024-11-12 10:28:57.481960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.079 Running I/O for 10 seconds... 
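bdevperf is now running I/O against the exported namespace: the JSON passed on /dev/fd/63 (expanded just above by gen_nvmf_target_json) attaches an NVMe-oF controller named Nvme0 over TCP to 10.0.0.3:4420 / nqn.2016-06.io.spdk:cnode0, and the workload is -q 64 -o 65536 -w verify -t 10. The waitforio helper that follows polls the resulting Nvme0n1 bdev over the bdevperf RPC socket until at least 100 reads have completed; the same check can be issued by hand with the commands visible in the trace (a sketch using the trace's own rpc_cmd wrapper):

    # same check the waitforio loop performs against the running bdevperf instance
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                    | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && echo "enough I/O observed"

Once enough reads are seen (963 here), the test removes and re-adds host0 on the subsystem while this I/O is still in flight, which is consistent with the ABORTED - SQ DELETION completions dumped for the outstanding WRITE commands at the end of the excerpt.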
00:07:09.650 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.650 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:09.650 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.651 10:28:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.651 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:09.651 [2024-11-12 10:28:58.369414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.369985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.369995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:09.651 [2024-11-12 10:28:58.370083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.651 [2024-11-12 10:28:58.370272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.651 [2024-11-12 10:28:58.370281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 
[2024-11-12 10:28:58.370314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 
10:28:58.370529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 
10:28:58.370806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.370972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.370990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 
10:28:58.371086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.652 [2024-11-12 10:28:58.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d272d0 is same with the state(6) to be set 00:07:09.652 [2024-11-12 10:28:58.371483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.652 [2024-11-12 10:28:58.371521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.652 [2024-11-12 10:28:58.371542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.652 [2024-11-12 10:28:58.371552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.653 [2024-11-12 10:28:58.371562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.653 [2024-11-12 10:28:58.371571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.653 [2024-11-12 10:28:58.371582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:09.653 [2024-11-12 10:28:58.371591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:09.653 [2024-11-12 10:28:58.371600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2cce0 is same with the state(6) to be set 00:07:09.653 [2024-11-12 10:28:58.372737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:09.653 task offset: 8192 on job bdev=Nvme0n1 fails 00:07:09.653 00:07:09.653 Latency(us) 00:07:09.653 [2024-11-12T10:28:58.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.653 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:09.653 Job: Nvme0n1 ended in about 0.78 seconds with error 00:07:09.653 Verification LBA range: start 0x0 length 0x400 00:07:09.653 Nvme0n1 : 0.78 1398.11 87.38 82.24 0.00 42101.14 2651.23 45279.42 00:07:09.653 [2024-11-12T10:28:58.411Z] =================================================================================================================== 00:07:09.653 [2024-11-12T10:28:58.411Z] Total : 1398.11 87.38 82.24 0.00 42101.14 2651.23 45279.42 00:07:09.653 [2024-11-12 10:28:58.374980] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.653 [2024-11-12 10:28:58.375125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2cce0 (9): Bad file descriptor 00:07:09.653 [2024-11-12 10:28:58.381220] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62108 00:07:11.030 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62108) - No such process 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.030 { 00:07:11.030 "params": { 00:07:11.030 "name": "Nvme$subsystem", 00:07:11.030 "trtype": "$TEST_TRANSPORT", 00:07:11.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.030 "adrfam": "ipv4", 00:07:11.030 "trsvcid": "$NVMF_PORT", 00:07:11.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.030 "hdgst": ${hdgst:-false}, 00:07:11.030 "ddgst": ${ddgst:-false} 00:07:11.030 }, 00:07:11.030 "method": "bdev_nvme_attach_controller" 00:07:11.030 } 00:07:11.030 EOF 00:07:11.030 )") 
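The ABORTED - SQ DELETION dump and the failed-job summary above are the intended outcome of this stage: while the first bdevperf run (pid 62108) is mid-flight, host_management.sh revokes the host's access to the subsystem, which aborts the outstanding writes and makes bdevperf exit on error (hence the later kill -9 finding no such process), then immediately restores access so the controller reset and the follow-up run can reconnect. Stripped of the test scaffolding, the two RPCs the trace shows are simply (sketch; rpc.py here talks to the nvmf target's RPC socket, not bdevperf's):

  # Revoke the initiator's access while I/O is in flight; outstanding writes
  # come back as ABORTED - SQ DELETION, exactly as in the dump above
  rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access so the subsequent controller reset / reconnect succeeds
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0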
00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:11.030 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.030 "params": { 00:07:11.030 "name": "Nvme0", 00:07:11.030 "trtype": "tcp", 00:07:11.030 "traddr": "10.0.0.3", 00:07:11.030 "adrfam": "ipv4", 00:07:11.030 "trsvcid": "4420", 00:07:11.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:11.030 "hdgst": false, 00:07:11.030 "ddgst": false 00:07:11.030 }, 00:07:11.030 "method": "bdev_nvme_attach_controller" 00:07:11.030 }' 00:07:11.030 [2024-11-12 10:28:59.424974] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:07:11.030 [2024-11-12 10:28:59.425074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:07:11.030 [2024-11-12 10:28:59.579380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.030 [2024-11-12 10:28:59.619709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.030 [2024-11-12 10:28:59.662341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.030 Running I/O for 1 seconds... 00:07:12.407 1472.00 IOPS, 92.00 MiB/s 00:07:12.407 Latency(us) 00:07:12.407 [2024-11-12T10:29:01.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.407 Verification LBA range: start 0x0 length 0x400 00:07:12.407 Nvme0n1 : 1.02 1506.68 94.17 0.00 0.00 41535.36 3619.37 44087.85 00:07:12.407 [2024-11-12T10:29:01.165Z] =================================================================================================================== 00:07:12.407 [2024-11-12T10:29:01.165Z] Total : 1506.68 94.17 0.00 0.00 41535.36 3619.37 44087.85 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:07:12.407 10:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.407 rmmod nvme_tcp 00:07:12.407 rmmod nvme_fabrics 00:07:12.407 rmmod nvme_keyring 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62067 ']' 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62067 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62067 ']' 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62067 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62067 00:07:12.407 killing process with pid 62067 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62067' 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62067 00:07:12.407 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62067 00:07:12.667 [2024-11-12 10:29:01.172856] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:12.667 10:29:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.667 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:12.927 ************************************ 00:07:12.927 END TEST nvmf_host_management 00:07:12.927 ************************************ 00:07:12.927 00:07:12.927 real 0m5.539s 00:07:12.927 user 0m20.003s 00:07:12.927 sys 0m1.521s 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.927 ************************************ 00:07:12.927 START TEST nvmf_lvol 00:07:12.927 ************************************ 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:12.927 * Looking for test storage... 
00:07:12.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.927 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.928 --rc genhtml_branch_coverage=1 00:07:12.928 --rc genhtml_function_coverage=1 00:07:12.928 --rc genhtml_legend=1 00:07:12.928 --rc geninfo_all_blocks=1 00:07:12.928 --rc geninfo_unexecuted_blocks=1 00:07:12.928 00:07:12.928 ' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.928 --rc genhtml_branch_coverage=1 00:07:12.928 --rc genhtml_function_coverage=1 00:07:12.928 --rc genhtml_legend=1 00:07:12.928 --rc geninfo_all_blocks=1 00:07:12.928 --rc geninfo_unexecuted_blocks=1 00:07:12.928 00:07:12.928 ' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.928 --rc genhtml_branch_coverage=1 00:07:12.928 --rc genhtml_function_coverage=1 00:07:12.928 --rc genhtml_legend=1 00:07:12.928 --rc geninfo_all_blocks=1 00:07:12.928 --rc geninfo_unexecuted_blocks=1 00:07:12.928 00:07:12.928 ' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.928 --rc genhtml_branch_coverage=1 00:07:12.928 --rc genhtml_function_coverage=1 00:07:12.928 --rc genhtml_legend=1 00:07:12.928 --rc geninfo_all_blocks=1 00:07:12.928 --rc geninfo_unexecuted_blocks=1 00:07:12.928 00:07:12.928 ' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.928 10:29:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:12.928 
10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.928 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
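Because this job runs with NET_TYPE=virt, nvmftestinit builds the NVMe/TCP fabric out of veth pairs instead of physical NICs: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, and both sides are stitched together through the nvmf_br bridge. Condensed from the nvmf_veth_init commands traced below, the topology amounts to roughly the following (sketch; the second interface pair and the iptables rules are omitted):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br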
00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:13.188 Cannot find device "nvmf_init_br" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:13.188 Cannot find device "nvmf_init_br2" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:13.188 Cannot find device "nvmf_tgt_br" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:13.188 Cannot find device "nvmf_tgt_br2" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:13.188 Cannot find device "nvmf_init_br" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:13.188 Cannot find device "nvmf_init_br2" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:13.188 Cannot find device "nvmf_tgt_br" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:13.188 Cannot find device "nvmf_tgt_br2" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:13.188 Cannot find device "nvmf_br" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:13.188 Cannot find device "nvmf_init_if" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:13.188 Cannot find device "nvmf_init_if2" 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:13.188 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.448 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:13.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:13.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:13.448 00:07:13.448 --- 10.0.0.3 ping statistics --- 00:07:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.448 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:13.448 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:13.448 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:07:13.448 00:07:13.448 --- 10.0.0.4 ping statistics --- 00:07:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.448 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:13.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:13.448 00:07:13.448 --- 10.0.0.1 ping statistics --- 00:07:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.448 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:13.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
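Each ipts call above is a small wrapper that re-issues the rule through iptables with an added '-m comment --comment SPDK_NVMF:...' tag, as the expanded commands at common.sh@790 show. Tagging the rules this way lets the later teardown remove exactly the rules this test inserted, using the iptables-save/grep/iptables-restore filter that appears further down in this log. A minimal sketch of the idea, reusing the same rule and port seen here:

  # open TCP port 4420 on the initiator interface, tagged so it can be found again
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # teardown: drop every rule carrying the SPDK_NVMF comment, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore

The four pings that follow check the data path in both directions: from the host to the namespaced target addresses (10.0.0.3 and 10.0.0.4), and via ip netns exec from the namespace back to the host initiators (10.0.0.1 and 10.0.0.2).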
00:07:13.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:07:13.448 00:07:13.448 --- 10.0.0.2 ping statistics --- 00:07:13.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.448 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62415 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62415 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62415 ']' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.448 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.449 10:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.449 [2024-11-12 10:29:02.155258] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:07:13.449 [2024-11-12 10:29:02.155587] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.708 [2024-11-12 10:29:02.307392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.708 [2024-11-12 10:29:02.349253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.708 [2024-11-12 10:29:02.349309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.708 [2024-11-12 10:29:02.349334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.708 [2024-11-12 10:29:02.349344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.708 [2024-11-12 10:29:02.349353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.708 [2024-11-12 10:29:02.350283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.708 [2024-11-12 10:29:02.350382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.708 [2024-11-12 10:29:02.350390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.708 [2024-11-12 10:29:02.383533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.643 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.901 [2024-11-12 10:29:03.410347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.901 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.159 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.159 10:29:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.417 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:15.417 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:15.676 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:15.934 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=858e3142-a78f-429a-a214-5fde16b1c51f 00:07:15.934 10:29:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 858e3142-a78f-429a-a214-5fde16b1c51f lvol 20 00:07:16.193 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=085abd40-62c8-4053-864f-d8d79a99cbb0 00:07:16.193 10:29:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.451 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 085abd40-62c8-4053-864f-d8d79a99cbb0 00:07:16.709 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:16.968 [2024-11-12 10:29:05.497341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:16.968 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:17.227 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62496 00:07:17.227 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.227 10:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:18.162 10:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 085abd40-62c8-4053-864f-d8d79a99cbb0 MY_SNAPSHOT 00:07:18.420 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a32996e8-c55c-4e0c-99f3-b270f7ae4d03 00:07:18.421 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 085abd40-62c8-4053-864f-d8d79a99cbb0 30 00:07:18.987 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a32996e8-c55c-4e0c-99f3-b270f7ae4d03 MY_CLONE 00:07:19.245 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e3878754-e6ac-44b7-b93c-6035018a4a77 00:07:19.245 10:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e3878754-e6ac-44b7-b93c-6035018a4a77 00:07:19.811 10:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62496 00:07:27.928 Initializing NVMe Controllers 00:07:27.928 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.928 Controller IO queue size 128, less than required. 00:07:27.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:27.928 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:27.928 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:27.928 Initialization complete. Launching workers. 
00:07:27.928 ======================================================== 00:07:27.928 Latency(us) 00:07:27.928 Device Information : IOPS MiB/s Average min max 00:07:27.928 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10838.10 42.34 11810.48 2175.79 48256.88 00:07:27.928 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10894.60 42.56 11749.72 1708.88 72062.38 00:07:27.928 ======================================================== 00:07:27.928 Total : 21732.70 84.89 11780.03 1708.88 72062.38 00:07:27.928 00:07:27.928 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.928 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 085abd40-62c8-4053-864f-d8d79a99cbb0 00:07:27.928 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 858e3142-a78f-429a-a214-5fde16b1c51f 00:07:28.217 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.217 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.217 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.218 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.218 rmmod nvme_tcp 00:07:28.218 rmmod nvme_fabrics 00:07:28.218 rmmod nvme_keyring 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62415 ']' 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62415 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62415 ']' 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62415 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62415 00:07:28.521 killing process with pid 62415 00:07:28.521 10:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
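For reference, the RPC sequence this lvol test exercised, collected in one place. Every call below appears verbatim earlier in this trace; only the generated UUIDs are replaced by <...> placeholders:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # transport, then backing store: two malloc bdevs striped into raid0, an lvstore on top
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                  # -> Malloc0
  $rpc bdev_malloc_create 64 512                                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs                         # -> <lvs-uuid>
  $rpc bdev_lvol_create -u <lvs-uuid> lvol 20                     # -> <lvol-uuid>

  # export the lvol over NVMe/TCP on the namespaced target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # while spdk_nvme_perf drives randwrite I/O against the subsystem, exercise snapshot paths
  $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                 # -> <snap-uuid>
  $rpc bdev_lvol_resize <lvol-uuid> 30
  $rpc bdev_lvol_clone <snap-uuid> MY_CLONE                       # -> <clone-uuid>
  $rpc bdev_lvol_inflate <clone-uuid>

  # teardown mirrors the setup
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete <lvol-uuid>
  $rpc bdev_lvol_delete_lvstore -u <lvs-uuid>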
common/autotest_common.sh@970 -- # echo 'killing process with pid 62415' 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62415 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62415 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:28.521 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:28.522 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:28.780 ************************************ 00:07:28.780 END TEST nvmf_lvol 00:07:28.780 ************************************ 00:07:28.780 00:07:28.780 real 0m15.915s 00:07:28.780 user 
1m5.537s 00:07:28.780 sys 0m4.240s 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.780 ************************************ 00:07:28.780 START TEST nvmf_lvs_grow 00:07:28.780 ************************************ 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:28.780 * Looking for test storage... 00:07:28.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:28.780 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.041 --rc genhtml_branch_coverage=1 00:07:29.041 --rc genhtml_function_coverage=1 00:07:29.041 --rc genhtml_legend=1 00:07:29.041 --rc geninfo_all_blocks=1 00:07:29.041 --rc geninfo_unexecuted_blocks=1 00:07:29.041 00:07:29.041 ' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.041 --rc genhtml_branch_coverage=1 00:07:29.041 --rc genhtml_function_coverage=1 00:07:29.041 --rc genhtml_legend=1 00:07:29.041 --rc geninfo_all_blocks=1 00:07:29.041 --rc geninfo_unexecuted_blocks=1 00:07:29.041 00:07:29.041 ' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.041 --rc genhtml_branch_coverage=1 00:07:29.041 --rc genhtml_function_coverage=1 00:07:29.041 --rc genhtml_legend=1 00:07:29.041 --rc geninfo_all_blocks=1 00:07:29.041 --rc geninfo_unexecuted_blocks=1 00:07:29.041 00:07:29.041 ' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.041 --rc genhtml_branch_coverage=1 00:07:29.041 --rc genhtml_function_coverage=1 00:07:29.041 --rc genhtml_legend=1 00:07:29.041 --rc geninfo_all_blocks=1 00:07:29.041 --rc geninfo_unexecuted_blocks=1 00:07:29.041 00:07:29.041 ' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.041 10:29:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.041 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
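The lvs_grow test drives two separate JSON-RPC endpoints: rpc_py talks to the nvmf target on its default socket (/var/tmp/spdk.sock, the one waitforlisten polls for below), while the bdevperf initiator started later listens on the bdevperf_rpc_sock just defined and is reached through rpc.py's -s option, as in the bdev_nvme_attach_controller call near the end of this log. Schematically, with both invocations taken from this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target-side RPC, default socket /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # initiator-side RPC, addressed to bdevperf's socket
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0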
00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:29.042 Cannot find device "nvmf_init_br" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:29.042 Cannot find device "nvmf_init_br2" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:29.042 Cannot find device "nvmf_tgt_br" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.042 Cannot find device "nvmf_tgt_br2" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:29.042 Cannot find device "nvmf_init_br" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:29.042 Cannot find device "nvmf_init_br2" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:29.042 Cannot find device "nvmf_tgt_br" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:29.042 Cannot find device "nvmf_tgt_br2" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:29.042 Cannot find device "nvmf_br" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:29.042 Cannot find device "nvmf_init_if" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:29.042 Cannot find device "nvmf_init_if2" 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:29.042 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
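The "Cannot find device" and "Cannot open network namespace" messages above are expected on a clean node: nvmf_veth_init first tears down any topology left over from a previous run, and only then rebuilds it. The paired '-- # true' entries in the trace indicate that each cleanup command is allowed to fail, i.e. the script effectively does something like:

  # best-effort cleanup before setup; missing devices are not an error here
  ip link set nvmf_init_br nomaster                          || true
  ip link delete nvmf_br type bridge                         || true
  ip link delete nvmf_init_if                                || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true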
00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:29.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:07:29.301 00:07:29.301 --- 10.0.0.3 ping statistics --- 00:07:29.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.301 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:29.301 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:29.301 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:29.301 00:07:29.301 --- 10.0.0.4 ping statistics --- 00:07:29.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.301 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:29.301 00:07:29.301 --- 10.0.0.1 ping statistics --- 00:07:29.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.301 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:29.301 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:29.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:29.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:07:29.301 00:07:29.301 --- 10.0.0.2 ping statistics --- 00:07:29.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.302 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.302 10:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62872 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62872 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 62872 ']' 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:29.302 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.562 [2024-11-12 10:29:18.093782] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:07:29.562 [2024-11-12 10:29:18.094115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.562 [2024-11-12 10:29:18.239970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.562 [2024-11-12 10:29:18.270881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.562 [2024-11-12 10:29:18.271347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.562 [2024-11-12 10:29:18.271436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.562 [2024-11-12 10:29:18.271527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.562 [2024-11-12 10:29:18.271602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.562 [2024-11-12 10:29:18.271954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.562 [2024-11-12 10:29:18.301082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.821 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.079 [2024-11-12 10:29:18.699430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.079 ************************************ 00:07:30.079 START TEST lvs_grow_clean 00:07:30.079 ************************************ 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:30.079 10:29:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.079 10:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:30.338 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:30.338 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:30.905 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:30.905 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:30.905 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea lvol 150 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7bed7722-9f83-4e49-8cf0-92c6ea953a7c 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.164 10:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:31.423 [2024-11-12 10:29:20.135868] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:31.423 [2024-11-12 10:29:20.136392] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:31.423 true 00:07:31.423 10:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:31.423 10:29:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:31.990 10:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:31.990 10:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.990 10:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7bed7722-9f83-4e49-8cf0-92c6ea953a7c 00:07:32.249 10:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:32.508 [2024-11-12 10:29:21.168464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:32.508 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62952 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62952 /var/tmp/bdevperf.sock 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 62952 ']' 00:07:32.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.767 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:33.026 [2024-11-12 10:29:21.531938] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:07:33.026 [2024-11-12 10:29:21.532026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62952 ] 00:07:33.026 [2024-11-12 10:29:21.669125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.026 [2024-11-12 10:29:21.698155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.026 [2024-11-12 10:29:21.725549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:33.026 10:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:33.594 Nvme0n1 00:07:33.594 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:33.594 [ 00:07:33.594 { 00:07:33.594 "name": "Nvme0n1", 00:07:33.594 "aliases": [ 00:07:33.594 "7bed7722-9f83-4e49-8cf0-92c6ea953a7c" 00:07:33.594 ], 00:07:33.594 "product_name": "NVMe disk", 00:07:33.594 "block_size": 4096, 00:07:33.594 "num_blocks": 38912, 00:07:33.594 "uuid": "7bed7722-9f83-4e49-8cf0-92c6ea953a7c", 00:07:33.594 "numa_id": -1, 00:07:33.594 "assigned_rate_limits": { 00:07:33.594 "rw_ios_per_sec": 0, 00:07:33.594 "rw_mbytes_per_sec": 0, 00:07:33.594 "r_mbytes_per_sec": 0, 00:07:33.594 "w_mbytes_per_sec": 0 00:07:33.594 }, 00:07:33.594 "claimed": false, 00:07:33.594 "zoned": false, 00:07:33.594 "supported_io_types": { 00:07:33.594 "read": true, 00:07:33.594 "write": true, 00:07:33.594 "unmap": true, 00:07:33.594 "flush": true, 00:07:33.594 "reset": true, 00:07:33.594 "nvme_admin": true, 00:07:33.594 "nvme_io": true, 00:07:33.594 "nvme_io_md": false, 00:07:33.594 "write_zeroes": true, 00:07:33.594 "zcopy": false, 00:07:33.594 "get_zone_info": false, 00:07:33.594 "zone_management": false, 00:07:33.594 "zone_append": false, 00:07:33.594 "compare": true, 00:07:33.594 "compare_and_write": true, 00:07:33.594 "abort": true, 00:07:33.594 "seek_hole": false, 00:07:33.594 "seek_data": false, 00:07:33.594 "copy": true, 00:07:33.594 "nvme_iov_md": false 00:07:33.594 }, 00:07:33.594 "memory_domains": [ 00:07:33.594 { 00:07:33.594 "dma_device_id": "system", 00:07:33.594 "dma_device_type": 1 00:07:33.594 } 00:07:33.594 ], 00:07:33.594 "driver_specific": { 00:07:33.594 "nvme": [ 00:07:33.594 { 00:07:33.594 "trid": { 00:07:33.594 "trtype": "TCP", 00:07:33.594 "adrfam": "IPv4", 00:07:33.594 "traddr": "10.0.0.3", 00:07:33.594 "trsvcid": "4420", 00:07:33.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:33.594 }, 00:07:33.594 "ctrlr_data": { 00:07:33.594 "cntlid": 1, 00:07:33.594 "vendor_id": "0x8086", 00:07:33.594 "model_number": "SPDK bdev Controller", 00:07:33.594 "serial_number": "SPDK0", 00:07:33.594 "firmware_revision": "25.01", 00:07:33.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.594 "oacs": { 00:07:33.594 "security": 0, 00:07:33.594 "format": 0, 00:07:33.594 "firmware": 0, 
00:07:33.594 "ns_manage": 0 00:07:33.594 }, 00:07:33.594 "multi_ctrlr": true, 00:07:33.594 "ana_reporting": false 00:07:33.594 }, 00:07:33.594 "vs": { 00:07:33.594 "nvme_version": "1.3" 00:07:33.594 }, 00:07:33.594 "ns_data": { 00:07:33.594 "id": 1, 00:07:33.594 "can_share": true 00:07:33.594 } 00:07:33.594 } 00:07:33.594 ], 00:07:33.594 "mp_policy": "active_passive" 00:07:33.594 } 00:07:33.594 } 00:07:33.594 ] 00:07:33.594 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62963 00:07:33.594 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:33.594 10:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.852 Running I/O for 10 seconds... 00:07:34.788 Latency(us) 00:07:34.789 [2024-11-12T10:29:23.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.789 Nvme0n1 : 1.00 6381.00 24.93 0.00 0.00 0.00 0.00 0.00 00:07:34.789 [2024-11-12T10:29:23.547Z] =================================================================================================================== 00:07:34.789 [2024-11-12T10:29:23.547Z] Total : 6381.00 24.93 0.00 0.00 0.00 0.00 0.00 00:07:34.789 00:07:35.724 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:35.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.724 Nvme0n1 : 2.00 6365.50 24.87 0.00 0.00 0.00 0.00 0.00 00:07:35.724 [2024-11-12T10:29:24.482Z] =================================================================================================================== 00:07:35.724 [2024-11-12T10:29:24.482Z] Total : 6365.50 24.87 0.00 0.00 0.00 0.00 0.00 00:07:35.724 00:07:35.984 true 00:07:35.984 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:35.984 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:36.243 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:36.243 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:36.243 10:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62963 00:07:36.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.811 Nvme0n1 : 3.00 6360.33 24.85 0.00 0.00 0.00 0.00 0.00 00:07:36.811 [2024-11-12T10:29:25.569Z] =================================================================================================================== 00:07:36.811 [2024-11-12T10:29:25.569Z] Total : 6360.33 24.85 0.00 0.00 0.00 0.00 0.00 00:07:36.811 00:07:37.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.747 Nvme0n1 : 4.00 6421.25 25.08 0.00 0.00 0.00 0.00 0.00 00:07:37.747 [2024-11-12T10:29:26.505Z] 
=================================================================================================================== 00:07:37.747 [2024-11-12T10:29:26.505Z] Total : 6421.25 25.08 0.00 0.00 0.00 0.00 0.00 00:07:37.747 00:07:38.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.684 Nvme0n1 : 5.00 6432.40 25.13 0.00 0.00 0.00 0.00 0.00 00:07:38.684 [2024-11-12T10:29:27.442Z] =================================================================================================================== 00:07:38.684 [2024-11-12T10:29:27.442Z] Total : 6432.40 25.13 0.00 0.00 0.00 0.00 0.00 00:07:38.684 00:07:40.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.062 Nvme0n1 : 6.00 6418.67 25.07 0.00 0.00 0.00 0.00 0.00 00:07:40.062 [2024-11-12T10:29:28.820Z] =================================================================================================================== 00:07:40.062 [2024-11-12T10:29:28.820Z] Total : 6418.67 25.07 0.00 0.00 0.00 0.00 0.00 00:07:40.062 00:07:40.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.998 Nvme0n1 : 7.00 6408.86 25.03 0.00 0.00 0.00 0.00 0.00 00:07:40.998 [2024-11-12T10:29:29.756Z] =================================================================================================================== 00:07:40.998 [2024-11-12T10:29:29.756Z] Total : 6408.86 25.03 0.00 0.00 0.00 0.00 0.00 00:07:40.998 00:07:41.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.952 Nvme0n1 : 8.00 6385.62 24.94 0.00 0.00 0.00 0.00 0.00 00:07:41.952 [2024-11-12T10:29:30.710Z] =================================================================================================================== 00:07:41.952 [2024-11-12T10:29:30.710Z] Total : 6385.62 24.94 0.00 0.00 0.00 0.00 0.00 00:07:41.952 00:07:42.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.889 Nvme0n1 : 9.00 6367.56 24.87 0.00 0.00 0.00 0.00 0.00 00:07:42.889 [2024-11-12T10:29:31.647Z] =================================================================================================================== 00:07:42.889 [2024-11-12T10:29:31.647Z] Total : 6367.56 24.87 0.00 0.00 0.00 0.00 0.00 00:07:42.889 00:07:43.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.825 Nvme0n1 : 10.00 6365.80 24.87 0.00 0.00 0.00 0.00 0.00 00:07:43.825 [2024-11-12T10:29:32.583Z] =================================================================================================================== 00:07:43.825 [2024-11-12T10:29:32.583Z] Total : 6365.80 24.87 0.00 0.00 0.00 0.00 0.00 00:07:43.825 00:07:43.825 00:07:43.825 Latency(us) 00:07:43.825 [2024-11-12T10:29:32.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.825 Nvme0n1 : 10.01 6372.25 24.89 0.00 0.00 20081.45 12630.57 69110.69 00:07:43.825 [2024-11-12T10:29:32.583Z] =================================================================================================================== 00:07:43.825 [2024-11-12T10:29:32.583Z] Total : 6372.25 24.89 0.00 0.00 20081.45 12630.57 69110.69 00:07:43.825 { 00:07:43.825 "results": [ 00:07:43.825 { 00:07:43.825 "job": "Nvme0n1", 00:07:43.825 "core_mask": "0x2", 00:07:43.825 "workload": "randwrite", 00:07:43.825 "status": "finished", 00:07:43.825 "queue_depth": 128, 00:07:43.825 "io_size": 4096, 00:07:43.825 "runtime": 
10.009972, 00:07:43.825 "iops": 6372.24559669098, 00:07:43.825 "mibps": 24.89158436207414, 00:07:43.825 "io_failed": 0, 00:07:43.825 "io_timeout": 0, 00:07:43.825 "avg_latency_us": 20081.44936814291, 00:07:43.825 "min_latency_us": 12630.574545454545, 00:07:43.825 "max_latency_us": 69110.6909090909 00:07:43.825 } 00:07:43.825 ], 00:07:43.825 "core_count": 1 00:07:43.825 } 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62952 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 62952 ']' 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 62952 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62952 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:43.825 killing process with pid 62952 00:07:43.825 Received shutdown signal, test time was about 10.000000 seconds 00:07:43.825 00:07:43.825 Latency(us) 00:07:43.825 [2024-11-12T10:29:32.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.825 [2024-11-12T10:29:32.583Z] =================================================================================================================== 00:07:43.825 [2024-11-12T10:29:32.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62952' 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 62952 00:07:43.825 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 62952 00:07:44.084 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:44.342 10:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.601 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:44.601 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:44.860 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:44.860 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:44.860 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.119 [2024-11-12 10:29:33.678653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:45.119 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:45.378 request: 00:07:45.378 { 00:07:45.378 "uuid": "b145c318-838e-4f4d-8b16-d3fdfe7c98ea", 00:07:45.378 "method": "bdev_lvol_get_lvstores", 00:07:45.378 "req_id": 1 00:07:45.378 } 00:07:45.378 Got JSON-RPC error response 00:07:45.378 response: 00:07:45.378 { 00:07:45.378 "code": -19, 00:07:45.378 "message": "No such device" 00:07:45.378 } 00:07:45.378 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:45.378 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.378 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.378 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.378 10:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.636 aio_bdev 00:07:45.636 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7bed7722-9f83-4e49-8cf0-92c6ea953a7c 00:07:45.636 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=7bed7722-9f83-4e49-8cf0-92c6ea953a7c 00:07:45.636 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.636 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:45.636 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.637 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.637 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.895 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bed7722-9f83-4e49-8cf0-92c6ea953a7c -t 2000 00:07:46.154 [ 00:07:46.154 { 00:07:46.154 "name": "7bed7722-9f83-4e49-8cf0-92c6ea953a7c", 00:07:46.154 "aliases": [ 00:07:46.154 "lvs/lvol" 00:07:46.154 ], 00:07:46.154 "product_name": "Logical Volume", 00:07:46.154 "block_size": 4096, 00:07:46.154 "num_blocks": 38912, 00:07:46.154 "uuid": "7bed7722-9f83-4e49-8cf0-92c6ea953a7c", 00:07:46.154 "assigned_rate_limits": { 00:07:46.154 "rw_ios_per_sec": 0, 00:07:46.154 "rw_mbytes_per_sec": 0, 00:07:46.154 "r_mbytes_per_sec": 0, 00:07:46.154 "w_mbytes_per_sec": 0 00:07:46.154 }, 00:07:46.154 "claimed": false, 00:07:46.154 "zoned": false, 00:07:46.154 "supported_io_types": { 00:07:46.154 "read": true, 00:07:46.154 "write": true, 00:07:46.154 "unmap": true, 00:07:46.154 "flush": false, 00:07:46.154 "reset": true, 00:07:46.154 "nvme_admin": false, 00:07:46.154 "nvme_io": false, 00:07:46.154 "nvme_io_md": false, 00:07:46.154 "write_zeroes": true, 00:07:46.154 "zcopy": false, 00:07:46.154 "get_zone_info": false, 00:07:46.154 "zone_management": false, 00:07:46.154 "zone_append": false, 00:07:46.154 "compare": false, 00:07:46.154 "compare_and_write": false, 00:07:46.154 "abort": false, 00:07:46.154 "seek_hole": true, 00:07:46.154 "seek_data": true, 00:07:46.154 "copy": false, 00:07:46.154 "nvme_iov_md": false 00:07:46.154 }, 00:07:46.154 "driver_specific": { 00:07:46.154 "lvol": { 00:07:46.154 "lvol_store_uuid": "b145c318-838e-4f4d-8b16-d3fdfe7c98ea", 00:07:46.154 "base_bdev": "aio_bdev", 00:07:46.154 "thin_provision": false, 00:07:46.154 "num_allocated_clusters": 38, 00:07:46.154 "snapshot": false, 00:07:46.154 "clone": false, 00:07:46.154 "esnap_clone": false 00:07:46.154 } 00:07:46.154 } 00:07:46.154 } 00:07:46.154 ] 00:07:46.154 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:46.154 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:46.154 10:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:46.412 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:46.412 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:07:46.412 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:46.671 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.671 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7bed7722-9f83-4e49-8cf0-92c6ea953a7c 00:07:46.929 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b145c318-838e-4f4d-8b16-d3fdfe7c98ea 00:07:47.188 10:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.476 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.755 ************************************ 00:07:47.755 END TEST lvs_grow_clean 00:07:47.755 ************************************ 00:07:47.755 00:07:47.755 real 0m17.744s 00:07:47.755 user 0m16.628s 00:07:47.755 sys 0m2.342s 00:07:47.755 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.755 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.014 ************************************ 00:07:48.014 START TEST lvs_grow_dirty 00:07:48.014 ************************************ 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.014 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.273 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:48.273 10:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:48.532 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:07:48.532 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.532 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:07:48.791 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.791 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.791 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 lvol 150 00:07:49.051 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=73799dd6-b097-4270-9d45-af0e9054fb51 00:07:49.051 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.051 10:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:49.309 [2024-11-12 10:29:38.013032] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:49.309 [2024-11-12 10:29:38.013423] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:49.309 true 00:07:49.309 10:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:07:49.309 10:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:49.568 10:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:49.568 10:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.825 10:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73799dd6-b097-4270-9d45-af0e9054fb51 00:07:50.084 10:29:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:50.342 [2024-11-12 10:29:39.049626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:50.342 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63215 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63215 /var/tmp/bdevperf.sock 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63215 ']' 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.601 10:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.859 [2024-11-12 10:29:39.397293] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:07:50.859 [2024-11-12 10:29:39.397550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63215 ] 00:07:50.859 [2024-11-12 10:29:39.549224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.859 [2024-11-12 10:29:39.587885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.117 [2024-11-12 10:29:39.621640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.684 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.684 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:51.684 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:51.943 Nvme0n1 00:07:51.943 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:52.202 [ 00:07:52.202 { 00:07:52.202 "name": "Nvme0n1", 00:07:52.202 "aliases": [ 00:07:52.202 "73799dd6-b097-4270-9d45-af0e9054fb51" 00:07:52.202 ], 00:07:52.202 "product_name": "NVMe disk", 00:07:52.202 "block_size": 4096, 00:07:52.202 "num_blocks": 38912, 00:07:52.202 "uuid": "73799dd6-b097-4270-9d45-af0e9054fb51", 00:07:52.202 "numa_id": -1, 00:07:52.202 "assigned_rate_limits": { 00:07:52.202 "rw_ios_per_sec": 0, 00:07:52.202 "rw_mbytes_per_sec": 0, 00:07:52.202 "r_mbytes_per_sec": 0, 00:07:52.202 "w_mbytes_per_sec": 0 00:07:52.202 }, 00:07:52.202 "claimed": false, 00:07:52.202 "zoned": false, 00:07:52.202 "supported_io_types": { 00:07:52.202 "read": true, 00:07:52.202 "write": true, 00:07:52.202 "unmap": true, 00:07:52.202 "flush": true, 00:07:52.202 "reset": true, 00:07:52.202 "nvme_admin": true, 00:07:52.202 "nvme_io": true, 00:07:52.202 "nvme_io_md": false, 00:07:52.202 "write_zeroes": true, 00:07:52.202 "zcopy": false, 00:07:52.202 "get_zone_info": false, 00:07:52.202 "zone_management": false, 00:07:52.202 "zone_append": false, 00:07:52.202 "compare": true, 00:07:52.202 "compare_and_write": true, 00:07:52.202 "abort": true, 00:07:52.202 "seek_hole": false, 00:07:52.202 "seek_data": false, 00:07:52.202 "copy": true, 00:07:52.202 "nvme_iov_md": false 00:07:52.202 }, 00:07:52.202 "memory_domains": [ 00:07:52.202 { 00:07:52.202 "dma_device_id": "system", 00:07:52.202 "dma_device_type": 1 00:07:52.202 } 00:07:52.202 ], 00:07:52.202 "driver_specific": { 00:07:52.202 "nvme": [ 00:07:52.202 { 00:07:52.202 "trid": { 00:07:52.202 "trtype": "TCP", 00:07:52.202 "adrfam": "IPv4", 00:07:52.202 "traddr": "10.0.0.3", 00:07:52.202 "trsvcid": "4420", 00:07:52.202 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:52.202 }, 00:07:52.202 "ctrlr_data": { 00:07:52.202 "cntlid": 1, 00:07:52.202 "vendor_id": "0x8086", 00:07:52.202 "model_number": "SPDK bdev Controller", 00:07:52.202 "serial_number": "SPDK0", 00:07:52.202 "firmware_revision": "25.01", 00:07:52.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.202 "oacs": { 00:07:52.202 "security": 0, 00:07:52.202 "format": 0, 00:07:52.202 "firmware": 0, 
00:07:52.202 "ns_manage": 0 00:07:52.202 }, 00:07:52.202 "multi_ctrlr": true, 00:07:52.202 "ana_reporting": false 00:07:52.202 }, 00:07:52.202 "vs": { 00:07:52.202 "nvme_version": "1.3" 00:07:52.202 }, 00:07:52.202 "ns_data": { 00:07:52.202 "id": 1, 00:07:52.202 "can_share": true 00:07:52.202 } 00:07:52.202 } 00:07:52.202 ], 00:07:52.202 "mp_policy": "active_passive" 00:07:52.202 } 00:07:52.202 } 00:07:52.202 ] 00:07:52.202 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63238 00:07:52.202 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:52.202 10:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:52.461 Running I/O for 10 seconds... 00:07:53.396 Latency(us) 00:07:53.396 [2024-11-12T10:29:42.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.396 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:53.396 [2024-11-12T10:29:42.154Z] =================================================================================================================== 00:07:53.396 [2024-11-12T10:29:42.154Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:53.396 00:07:54.333 10:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:07:54.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.333 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:54.333 [2024-11-12T10:29:43.091Z] =================================================================================================================== 00:07:54.333 [2024-11-12T10:29:43.091Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:54.333 00:07:54.592 true 00:07:54.592 10:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:07:54.592 10:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.161 10:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.161 10:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.161 10:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63238 00:07:55.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.420 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:07:55.420 [2024-11-12T10:29:44.178Z] =================================================================================================================== 00:07:55.420 [2024-11-12T10:29:44.178Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:07:55.420 00:07:56.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.355 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:07:56.355 [2024-11-12T10:29:45.113Z] 
=================================================================================================================== 00:07:56.355 [2024-11-12T10:29:45.113Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:07:56.355 00:07:57.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.731 Nvme0n1 : 5.00 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:07:57.731 [2024-11-12T10:29:46.489Z] =================================================================================================================== 00:07:57.731 [2024-11-12T10:29:46.489Z] Total : 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:07:57.731 00:07:58.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.298 Nvme0n1 : 6.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:58.298 [2024-11-12T10:29:47.056Z] =================================================================================================================== 00:07:58.298 [2024-11-12T10:29:47.056Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:07:58.298 00:07:59.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.673 Nvme0n1 : 7.00 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:07:59.673 [2024-11-12T10:29:48.431Z] =================================================================================================================== 00:07:59.673 [2024-11-12T10:29:48.431Z] Total : 6513.29 25.44 0.00 0.00 0.00 0.00 0.00 00:07:59.673 00:08:00.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.610 Nvme0n1 : 8.00 6318.12 24.68 0.00 0.00 0.00 0.00 0.00 00:08:00.610 [2024-11-12T10:29:49.368Z] =================================================================================================================== 00:08:00.610 [2024-11-12T10:29:49.368Z] Total : 6318.12 24.68 0.00 0.00 0.00 0.00 0.00 00:08:00.610 00:08:01.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.546 Nvme0n1 : 9.00 6321.67 24.69 0.00 0.00 0.00 0.00 0.00 00:08:01.546 [2024-11-12T10:29:50.304Z] =================================================================================================================== 00:08:01.546 [2024-11-12T10:29:50.304Z] Total : 6321.67 24.69 0.00 0.00 0.00 0.00 0.00 00:08:01.546 00:08:02.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.482 Nvme0n1 : 10.00 6311.80 24.66 0.00 0.00 0.00 0.00 0.00 00:08:02.482 [2024-11-12T10:29:51.240Z] =================================================================================================================== 00:08:02.482 [2024-11-12T10:29:51.240Z] Total : 6311.80 24.66 0.00 0.00 0.00 0.00 0.00 00:08:02.482 00:08:02.482 00:08:02.482 Latency(us) 00:08:02.482 [2024-11-12T10:29:51.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.482 Nvme0n1 : 10.02 6314.82 24.67 0.00 0.00 20263.93 15371.17 265003.75 00:08:02.482 [2024-11-12T10:29:51.240Z] =================================================================================================================== 00:08:02.482 [2024-11-12T10:29:51.240Z] Total : 6314.82 24.67 0.00 0.00 20263.93 15371.17 265003.75 00:08:02.482 { 00:08:02.482 "results": [ 00:08:02.482 { 00:08:02.482 "job": "Nvme0n1", 00:08:02.482 "core_mask": "0x2", 00:08:02.482 "workload": "randwrite", 00:08:02.482 "status": "finished", 00:08:02.482 "queue_depth": 128, 00:08:02.482 "io_size": 4096, 00:08:02.482 "runtime": 
10.015491, 00:08:02.482 "iops": 6314.817715876336, 00:08:02.482 "mibps": 24.667256702641936, 00:08:02.482 "io_failed": 0, 00:08:02.482 "io_timeout": 0, 00:08:02.482 "avg_latency_us": 20263.92664763564, 00:08:02.482 "min_latency_us": 15371.17090909091, 00:08:02.482 "max_latency_us": 265003.75272727275 00:08:02.482 } 00:08:02.482 ], 00:08:02.482 "core_count": 1 00:08:02.482 } 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63215 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63215 ']' 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63215 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63215 00:08:02.482 killing process with pid 63215 00:08:02.482 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.482 00:08:02.482 Latency(us) 00:08:02.482 [2024-11-12T10:29:51.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.482 [2024-11-12T10:29:51.240Z] =================================================================================================================== 00:08:02.482 [2024-11-12T10:29:51.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63215' 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63215 00:08:02.482 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63215 00:08:02.741 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:03.000 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.259 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:03.259 10:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62872 
00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62872 00:08:03.518 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62872 Killed "${NVMF_APP[@]}" "$@" 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:03.518 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63372 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63372 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63372 ']' 00:08:03.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.519 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.519 [2024-11-12 10:29:52.199621] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:03.519 [2024-11-12 10:29:52.199909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.786 [2024-11-12 10:29:52.344826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.786 [2024-11-12 10:29:52.371025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.786 [2024-11-12 10:29:52.371072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.786 [2024-11-12 10:29:52.371097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.786 [2024-11-12 10:29:52.371104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.786 [2024-11-12 10:29:52.371109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:03.786 [2024-11-12 10:29:52.371417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.786 [2024-11-12 10:29:52.398319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.786 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.075 [2024-11-12 10:29:52.709365] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:04.075 [2024-11-12 10:29:52.709953] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:04.075 [2024-11-12 10:29:52.710323] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 73799dd6-b097-4270-9d45-af0e9054fb51 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=73799dd6-b097-4270-9d45-af0e9054fb51 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:04.075 10:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.334 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73799dd6-b097-4270-9d45-af0e9054fb51 -t 2000 00:08:04.593 [ 00:08:04.593 { 00:08:04.593 "name": "73799dd6-b097-4270-9d45-af0e9054fb51", 00:08:04.593 "aliases": [ 00:08:04.593 "lvs/lvol" 00:08:04.593 ], 00:08:04.593 "product_name": "Logical Volume", 00:08:04.593 "block_size": 4096, 00:08:04.593 "num_blocks": 38912, 00:08:04.593 "uuid": "73799dd6-b097-4270-9d45-af0e9054fb51", 00:08:04.593 "assigned_rate_limits": { 00:08:04.593 "rw_ios_per_sec": 0, 00:08:04.593 "rw_mbytes_per_sec": 0, 00:08:04.593 "r_mbytes_per_sec": 0, 00:08:04.593 "w_mbytes_per_sec": 0 00:08:04.593 }, 00:08:04.593 
"claimed": false, 00:08:04.593 "zoned": false, 00:08:04.593 "supported_io_types": { 00:08:04.593 "read": true, 00:08:04.593 "write": true, 00:08:04.593 "unmap": true, 00:08:04.593 "flush": false, 00:08:04.593 "reset": true, 00:08:04.593 "nvme_admin": false, 00:08:04.593 "nvme_io": false, 00:08:04.593 "nvme_io_md": false, 00:08:04.593 "write_zeroes": true, 00:08:04.593 "zcopy": false, 00:08:04.593 "get_zone_info": false, 00:08:04.593 "zone_management": false, 00:08:04.593 "zone_append": false, 00:08:04.593 "compare": false, 00:08:04.593 "compare_and_write": false, 00:08:04.593 "abort": false, 00:08:04.593 "seek_hole": true, 00:08:04.593 "seek_data": true, 00:08:04.593 "copy": false, 00:08:04.593 "nvme_iov_md": false 00:08:04.593 }, 00:08:04.593 "driver_specific": { 00:08:04.593 "lvol": { 00:08:04.593 "lvol_store_uuid": "e1f7632d-27d6-4c0b-bd25-7885900c8fb3", 00:08:04.593 "base_bdev": "aio_bdev", 00:08:04.593 "thin_provision": false, 00:08:04.593 "num_allocated_clusters": 38, 00:08:04.593 "snapshot": false, 00:08:04.593 "clone": false, 00:08:04.593 "esnap_clone": false 00:08:04.593 } 00:08:04.593 } 00:08:04.593 } 00:08:04.593 ] 00:08:04.593 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:04.593 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:04.593 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:04.852 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:04.853 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:04.853 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:05.112 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:05.112 10:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.371 [2024-11-12 10:29:54.003516] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.371 10:29:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:05.371 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:05.631 request: 00:08:05.631 { 00:08:05.631 "uuid": "e1f7632d-27d6-4c0b-bd25-7885900c8fb3", 00:08:05.631 "method": "bdev_lvol_get_lvstores", 00:08:05.631 "req_id": 1 00:08:05.631 } 00:08:05.631 Got JSON-RPC error response 00:08:05.631 response: 00:08:05.631 { 00:08:05.631 "code": -19, 00:08:05.631 "message": "No such device" 00:08:05.631 } 00:08:05.631 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:05.631 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.631 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.631 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.631 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.889 aio_bdev 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 73799dd6-b097-4270-9d45-af0e9054fb51 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=73799dd6-b097-4270-9d45-af0e9054fb51 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:05.889 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.149 10:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73799dd6-b097-4270-9d45-af0e9054fb51 -t 2000 00:08:06.408 [ 00:08:06.408 { 
00:08:06.408 "name": "73799dd6-b097-4270-9d45-af0e9054fb51", 00:08:06.408 "aliases": [ 00:08:06.408 "lvs/lvol" 00:08:06.408 ], 00:08:06.408 "product_name": "Logical Volume", 00:08:06.408 "block_size": 4096, 00:08:06.408 "num_blocks": 38912, 00:08:06.408 "uuid": "73799dd6-b097-4270-9d45-af0e9054fb51", 00:08:06.408 "assigned_rate_limits": { 00:08:06.408 "rw_ios_per_sec": 0, 00:08:06.408 "rw_mbytes_per_sec": 0, 00:08:06.408 "r_mbytes_per_sec": 0, 00:08:06.408 "w_mbytes_per_sec": 0 00:08:06.408 }, 00:08:06.408 "claimed": false, 00:08:06.408 "zoned": false, 00:08:06.408 "supported_io_types": { 00:08:06.408 "read": true, 00:08:06.408 "write": true, 00:08:06.408 "unmap": true, 00:08:06.408 "flush": false, 00:08:06.408 "reset": true, 00:08:06.408 "nvme_admin": false, 00:08:06.408 "nvme_io": false, 00:08:06.408 "nvme_io_md": false, 00:08:06.408 "write_zeroes": true, 00:08:06.408 "zcopy": false, 00:08:06.408 "get_zone_info": false, 00:08:06.408 "zone_management": false, 00:08:06.408 "zone_append": false, 00:08:06.408 "compare": false, 00:08:06.408 "compare_and_write": false, 00:08:06.408 "abort": false, 00:08:06.408 "seek_hole": true, 00:08:06.408 "seek_data": true, 00:08:06.408 "copy": false, 00:08:06.408 "nvme_iov_md": false 00:08:06.408 }, 00:08:06.408 "driver_specific": { 00:08:06.408 "lvol": { 00:08:06.408 "lvol_store_uuid": "e1f7632d-27d6-4c0b-bd25-7885900c8fb3", 00:08:06.408 "base_bdev": "aio_bdev", 00:08:06.408 "thin_provision": false, 00:08:06.408 "num_allocated_clusters": 38, 00:08:06.408 "snapshot": false, 00:08:06.408 "clone": false, 00:08:06.408 "esnap_clone": false 00:08:06.408 } 00:08:06.408 } 00:08:06.408 } 00:08:06.408 ] 00:08:06.408 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:06.408 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:06.408 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.667 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.667 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:06.667 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.926 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.926 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 73799dd6-b097-4270-9d45-af0e9054fb51 00:08:07.185 10:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 00:08:07.444 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.703 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:08.273 ************************************ 00:08:08.273 END TEST lvs_grow_dirty 00:08:08.273 ************************************ 00:08:08.273 00:08:08.273 real 0m20.251s 00:08:08.273 user 0m40.730s 00:08:08.273 sys 0m9.438s 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:08.273 nvmf_trace.0 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.273 10:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:08.841 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.841 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:08.841 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.841 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.841 rmmod nvme_tcp 00:08:08.841 rmmod nvme_fabrics 00:08:08.841 rmmod nvme_keyring 00:08:08.841 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63372 ']' 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63372 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63372 ']' 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63372 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:08.842 10:29:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63372 00:08:08.842 killing process with pid 63372 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63372' 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63372 00:08:08.842 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63372 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.100 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:09.359 00:08:09.359 real 0m40.449s 00:08:09.359 user 1m3.499s 00:08:09.359 sys 0m12.875s 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.359 ************************************ 00:08:09.359 END TEST nvmf_lvs_grow 00:08:09.359 ************************************ 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.359 ************************************ 00:08:09.359 START TEST nvmf_bdev_io_wait 00:08:09.359 ************************************ 00:08:09.359 10:29:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.359 * Looking for test storage... 
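The lvs_grow_dirty run that finishes above reduces to a short RPC sequence: re-create the AIO bdev over the saved backing file so blobstore recovery replays the dirty lvstore, wait for the lvol bdev to reappear under its old UUID, confirm the cluster accounting, then tear the lvol, lvstore and AIO bdev down again. A minimal sketch of that sequence, assuming a running nvmf_tgt with its RPC socket at the default /var/tmp/spdk.sock; the UUIDs are the ones from this particular run and would differ elsewhere.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# Re-attach the backing file; blobstore recovery runs as part of the load.
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_wait_for_examine

# The lvol bdev comes back under its original UUID once examine finishes.
$rpc bdev_get_bdevs -b 73799dd6-b097-4270-9d45-af0e9054fb51 -t 2000

# Cluster accounting survives recovery: 61 free out of 99 data clusters in this run.
$rpc bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 | jq -r '.[0].free_clusters'
$rpc bdev_lvol_get_lvstores -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3 | jq -r '.[0].total_data_clusters'

# Teardown: lvol, then lvstore, then the AIO bdev and its backing file.
$rpc bdev_lvol_delete 73799dd6-b097-4270-9d45-af0e9054fb51
$rpc bdev_lvol_delete_lvstore -u e1f7632d-27d6-4c0b-bd25-7885900c8fb3
$rpc bdev_aio_delete aio_bdev
rm -f "$aio_file"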
00:08:09.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.359 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.359 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.359 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.620 --rc genhtml_branch_coverage=1 00:08:09.620 --rc genhtml_function_coverage=1 00:08:09.620 --rc genhtml_legend=1 00:08:09.620 --rc geninfo_all_blocks=1 00:08:09.620 --rc geninfo_unexecuted_blocks=1 00:08:09.620 00:08:09.620 ' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.620 --rc genhtml_branch_coverage=1 00:08:09.620 --rc genhtml_function_coverage=1 00:08:09.620 --rc genhtml_legend=1 00:08:09.620 --rc geninfo_all_blocks=1 00:08:09.620 --rc geninfo_unexecuted_blocks=1 00:08:09.620 00:08:09.620 ' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.620 --rc genhtml_branch_coverage=1 00:08:09.620 --rc genhtml_function_coverage=1 00:08:09.620 --rc genhtml_legend=1 00:08:09.620 --rc geninfo_all_blocks=1 00:08:09.620 --rc geninfo_unexecuted_blocks=1 00:08:09.620 00:08:09.620 ' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.620 --rc genhtml_branch_coverage=1 00:08:09.620 --rc genhtml_function_coverage=1 00:08:09.620 --rc genhtml_legend=1 00:08:09.620 --rc geninfo_all_blocks=1 00:08:09.620 --rc geninfo_unexecuted_blocks=1 00:08:09.620 00:08:09.620 ' 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:09.620 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
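Sourcing test/nvmf/common.sh above mostly amounts to picking up a handful of defaults before nvmftestinit runs; condensed here as a sketch with this run's values (the hostnqn/hostid pair comes from nvme gen-hostnqn and is unique per run, and the NVME_HOSTID derivation is inferred from the resolved values in the trace):

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # this run: nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the uuid suffix, reused as --hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NET_TYPE=virt                      # selects the veth/netns topology set up below
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

# bdev_io_wait.sh itself only adds the Malloc geometry exported later.
MALLOC_BDEV_SIZE=64
MALLOC_BLOCK_SIZE=512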
00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.621 
10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:09.621 Cannot find device "nvmf_init_br" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:09.621 Cannot find device "nvmf_init_br2" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:09.621 Cannot find device "nvmf_tgt_br" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.621 Cannot find device "nvmf_tgt_br2" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:09.621 Cannot find device "nvmf_init_br" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:09.621 Cannot find device "nvmf_init_br2" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:09.621 Cannot find device "nvmf_tgt_br" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:09.621 Cannot find device "nvmf_tgt_br2" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:09.621 Cannot find device "nvmf_br" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:09.621 Cannot find device "nvmf_init_if" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:09.621 Cannot find device "nvmf_init_if2" 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:09.621 
10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.621 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:09.622 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.622 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.622 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:09.622 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.622 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:09.881 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:09.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:09.882 00:08:09.882 --- 10.0.0.3 ping statistics --- 00:08:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.882 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:09.882 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:09.882 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:09.882 00:08:09.882 --- 10.0.0.4 ping statistics --- 00:08:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.882 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:09.882 00:08:09.882 --- 10.0.0.1 ping statistics --- 00:08:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.882 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:09.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:09.882 00:08:09.882 --- 10.0.0.2 ping statistics --- 00:08:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.882 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63742 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63742 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 63742 ']' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.882 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 [2024-11-12 10:29:58.690950] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
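The nvmf_veth_init sequence above builds a small bridged topology and then proves it with the four pings: the initiator keeps nvmf_init_if and nvmf_init_if2 (10.0.0.1, 10.0.0.2) in the root namespace, the target ends nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) move into nvmf_tgt_ns_spdk, and all four peer links are joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420. Condensed from the trace into a standalone sketch (the initial "Cannot find device" pre-cleanup and error handling are omitted):

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
for l in nvmf_tgt_if nvmf_tgt_if2 lo; do
    ip netns exec nvmf_tgt_ns_spdk ip link set "$l" up
done

ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT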
00:08:10.142 [2024-11-12 10:29:58.691281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.142 [2024-11-12 10:29:58.836610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.142 [2024-11-12 10:29:58.867142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.142 [2024-11-12 10:29:58.867224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.142 [2024-11-12 10:29:58.867251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.142 [2024-11-12 10:29:58.867258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.142 [2024-11-12 10:29:58.867264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.142 [2024-11-12 10:29:58.867974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.142 [2024-11-12 10:29:58.868776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.142 [2024-11-12 10:29:58.868930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.142 [2024-11-12 10:29:58.868937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 [2024-11-12 10:29:59.038664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 [2024-11-12 10:29:59.053459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 Malloc0 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 [2024-11-12 10:29:59.108511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63764 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63766 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:10.402 10:29:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:10.402 { 00:08:10.402 "params": { 00:08:10.402 "name": "Nvme$subsystem", 00:08:10.402 "trtype": "$TEST_TRANSPORT", 00:08:10.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.402 "adrfam": "ipv4", 00:08:10.402 "trsvcid": "$NVMF_PORT", 00:08:10.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.402 "hdgst": ${hdgst:-false}, 00:08:10.402 "ddgst": ${ddgst:-false} 00:08:10.402 }, 00:08:10.402 "method": "bdev_nvme_attach_controller" 00:08:10.402 } 00:08:10.402 EOF 00:08:10.402 )") 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63768 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63771 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:10.402 { 00:08:10.402 "params": { 00:08:10.402 "name": "Nvme$subsystem", 00:08:10.402 "trtype": "$TEST_TRANSPORT", 00:08:10.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.402 "adrfam": "ipv4", 00:08:10.402 "trsvcid": "$NVMF_PORT", 00:08:10.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.402 "hdgst": ${hdgst:-false}, 00:08:10.402 "ddgst": ${ddgst:-false} 00:08:10.402 }, 00:08:10.402 "method": "bdev_nvme_attach_controller" 00:08:10.402 } 00:08:10.402 EOF 00:08:10.402 )") 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:10.402 
10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:10.402 { 00:08:10.402 "params": { 00:08:10.402 "name": "Nvme$subsystem", 00:08:10.402 "trtype": "$TEST_TRANSPORT", 00:08:10.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.402 "adrfam": "ipv4", 00:08:10.402 "trsvcid": "$NVMF_PORT", 00:08:10.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.402 "hdgst": ${hdgst:-false}, 00:08:10.402 "ddgst": ${ddgst:-false} 00:08:10.402 }, 00:08:10.402 "method": "bdev_nvme_attach_controller" 00:08:10.402 } 00:08:10.402 EOF 00:08:10.402 )") 00:08:10.402 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:10.402 { 00:08:10.402 "params": { 00:08:10.402 "name": "Nvme$subsystem", 00:08:10.402 "trtype": "$TEST_TRANSPORT", 00:08:10.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.402 "adrfam": "ipv4", 00:08:10.402 "trsvcid": "$NVMF_PORT", 00:08:10.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.402 "hdgst": ${hdgst:-false}, 00:08:10.402 "ddgst": ${ddgst:-false} 00:08:10.402 }, 00:08:10.402 "method": "bdev_nvme_attach_controller" 00:08:10.403 } 00:08:10.403 EOF 00:08:10.403 )") 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:10.403 "params": { 00:08:10.403 "name": "Nvme1", 00:08:10.403 "trtype": "tcp", 00:08:10.403 "traddr": "10.0.0.3", 00:08:10.403 "adrfam": "ipv4", 00:08:10.403 "trsvcid": "4420", 00:08:10.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.403 "hdgst": false, 00:08:10.403 "ddgst": false 00:08:10.403 }, 00:08:10.403 "method": "bdev_nvme_attach_controller" 00:08:10.403 }' 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:10.403 "params": { 00:08:10.403 "name": "Nvme1", 00:08:10.403 "trtype": "tcp", 00:08:10.403 "traddr": "10.0.0.3", 00:08:10.403 "adrfam": "ipv4", 00:08:10.403 "trsvcid": "4420", 00:08:10.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.403 "hdgst": false, 00:08:10.403 "ddgst": false 00:08:10.403 }, 00:08:10.403 "method": "bdev_nvme_attach_controller" 00:08:10.403 }' 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:10.403 "params": { 00:08:10.403 "name": "Nvme1", 00:08:10.403 "trtype": "tcp", 00:08:10.403 "traddr": "10.0.0.3", 00:08:10.403 "adrfam": "ipv4", 00:08:10.403 "trsvcid": "4420", 00:08:10.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.403 "hdgst": false, 00:08:10.403 "ddgst": false 00:08:10.403 }, 00:08:10.403 "method": "bdev_nvme_attach_controller" 00:08:10.403 }' 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:10.403 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:10.403 "params": { 00:08:10.403 "name": "Nvme1", 00:08:10.403 "trtype": "tcp", 00:08:10.403 "traddr": "10.0.0.3", 00:08:10.403 "adrfam": "ipv4", 00:08:10.403 "trsvcid": "4420", 00:08:10.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.403 "hdgst": false, 00:08:10.403 "ddgst": false 00:08:10.403 }, 00:08:10.403 "method": "bdev_nvme_attach_controller" 00:08:10.403 }' 00:08:10.662 [2024-11-12 10:29:59.177070] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:10.662 [2024-11-12 10:29:59.177434] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:10.662 [2024-11-12 10:29:59.181506] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:10.662 [2024-11-12 10:29:59.181739] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:10.662 10:29:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63764 00:08:10.662 [2024-11-12 10:29:59.198348] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:10.662 [2024-11-12 10:29:59.198594] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:10.662 [2024-11-12 10:29:59.207614] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:08:10.662 [2024-11-12 10:29:59.208232] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:10.662 [2024-11-12 10:29:59.374870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.662 [2024-11-12 10:29:59.406716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:10.662 [2024-11-12 10:29:59.417582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.921 [2024-11-12 10:29:59.421000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.921 [2024-11-12 10:29:59.448272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:10.921 [2024-11-12 10:29:59.460419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.921 [2024-11-12 10:29:59.462133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.921 [2024-11-12 10:29:59.491815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:10.921 [2024-11-12 10:29:59.505587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.921 [2024-11-12 10:29:59.514261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.921 Running I/O for 1 seconds... 00:08:10.921 [2024-11-12 10:29:59.544936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:10.921 [2024-11-12 10:29:59.558978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.921 Running I/O for 1 seconds... 00:08:10.921 Running I/O for 1 seconds... 00:08:10.921 Running I/O for 1 seconds... 
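The four "Running I/O for 1 seconds..." notices above come from the four bdevperf instances that bdev_io_wait.sh launches in parallel: write (core mask 0x10), read (0x20), flush (0x40) and unmap (0x80), each run with -q 128 -o 4096 -t 1 -s 256. Every instance reads its bdev configuration from the JSON that gen_nvmf_target_json writes to /dev/fd/63; a minimal sketch of that resolved document, reconstructed from the printf output logged above, is shown below (the outer "subsystems"/"bdev" wrapper is assumed from gen_nvmf_target_json in nvmf/common.sh and is not printed verbatim in this log):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

All four instances attach to the same nqn.2016-06.io.spdk:cnode1 listener at 10.0.0.3:4420, so the per-workload latency tables that follow all report against the single Nvme1n1 namespace.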
00:08:11.913 166672.00 IOPS, 651.06 MiB/s 00:08:11.913 Latency(us) 00:08:11.913 [2024-11-12T10:30:00.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.913 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:11.913 Nvme1n1 : 1.00 166291.05 649.57 0.00 0.00 765.58 372.36 2263.97 00:08:11.913 [2024-11-12T10:30:00.671Z] =================================================================================================================== 00:08:11.913 [2024-11-12T10:30:00.671Z] Total : 166291.05 649.57 0.00 0.00 765.58 372.36 2263.97 00:08:11.913 10204.00 IOPS, 39.86 MiB/s 00:08:11.913 Latency(us) 00:08:11.913 [2024-11-12T10:30:00.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.913 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:11.913 Nvme1n1 : 1.01 10256.72 40.07 0.00 0.00 12425.71 7030.23 20614.05 00:08:11.913 [2024-11-12T10:30:00.671Z] =================================================================================================================== 00:08:11.913 [2024-11-12T10:30:00.671Z] Total : 10256.72 40.07 0.00 0.00 12425.71 7030.23 20614.05 00:08:11.913 7005.00 IOPS, 27.36 MiB/s 00:08:11.913 Latency(us) 00:08:11.913 [2024-11-12T10:30:00.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.913 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:11.913 Nvme1n1 : 1.01 7052.87 27.55 0.00 0.00 18039.72 9651.67 27644.28 00:08:11.913 [2024-11-12T10:30:00.671Z] =================================================================================================================== 00:08:11.913 [2024-11-12T10:30:00.671Z] Total : 7052.87 27.55 0.00 0.00 18039.72 9651.67 27644.28 00:08:12.172 8477.00 IOPS, 33.11 MiB/s [2024-11-12T10:30:00.930Z] 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63766 00:08:12.172 00:08:12.172 Latency(us) 00:08:12.172 [2024-11-12T10:30:00.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.172 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:12.172 Nvme1n1 : 1.01 8559.93 33.44 0.00 0.00 14893.65 6553.60 24903.68 00:08:12.172 [2024-11-12T10:30:00.930Z] =================================================================================================================== 00:08:12.172 [2024-11-12T10:30:00.930Z] Total : 8559.93 33.44 0.00 0.00 14893.65 6553.60 24903.68 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63768 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63771 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.172 rmmod nvme_tcp 00:08:12.172 rmmod nvme_fabrics 00:08:12.172 rmmod nvme_keyring 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63742 ']' 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63742 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 63742 ']' 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 63742 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:12.172 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63742 00:08:12.431 killing process with pid 63742 00:08:12.431 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:12.431 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:12.431 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63742' 00:08:12.431 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 63742 00:08:12.431 10:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 63742 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:12.431 10:30:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:12.431 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:12.690 00:08:12.690 real 0m3.327s 00:08:12.690 user 0m12.942s 00:08:12.690 sys 0m2.083s 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.690 ************************************ 00:08:12.690 END TEST nvmf_bdev_io_wait 00:08:12.690 ************************************ 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.690 ************************************ 00:08:12.690 START TEST nvmf_queue_depth 00:08:12.690 ************************************ 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:12.690 * Looking for test 
storage... 00:08:12.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.690 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.950 --rc genhtml_branch_coverage=1 00:08:12.950 --rc genhtml_function_coverage=1 00:08:12.950 --rc genhtml_legend=1 00:08:12.950 --rc geninfo_all_blocks=1 00:08:12.950 --rc geninfo_unexecuted_blocks=1 00:08:12.950 00:08:12.950 ' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.950 --rc genhtml_branch_coverage=1 00:08:12.950 --rc genhtml_function_coverage=1 00:08:12.950 --rc genhtml_legend=1 00:08:12.950 --rc geninfo_all_blocks=1 00:08:12.950 --rc geninfo_unexecuted_blocks=1 00:08:12.950 00:08:12.950 ' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.950 --rc genhtml_branch_coverage=1 00:08:12.950 --rc genhtml_function_coverage=1 00:08:12.950 --rc genhtml_legend=1 00:08:12.950 --rc geninfo_all_blocks=1 00:08:12.950 --rc geninfo_unexecuted_blocks=1 00:08:12.950 00:08:12.950 ' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.950 --rc genhtml_branch_coverage=1 00:08:12.950 --rc genhtml_function_coverage=1 00:08:12.950 --rc genhtml_legend=1 00:08:12.950 --rc geninfo_all_blocks=1 00:08:12.950 --rc geninfo_unexecuted_blocks=1 00:08:12.950 00:08:12.950 ' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.950 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:12.951 
10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.951 10:30:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:12.951 Cannot find device "nvmf_init_br" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:12.951 Cannot find device "nvmf_init_br2" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:12.951 Cannot find device "nvmf_tgt_br" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.951 Cannot find device "nvmf_tgt_br2" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:12.951 Cannot find device "nvmf_init_br" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:12.951 Cannot find device "nvmf_init_br2" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:12.951 Cannot find device "nvmf_tgt_br" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:12.951 Cannot find device "nvmf_tgt_br2" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:12.951 Cannot find device "nvmf_br" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:12.951 Cannot find device "nvmf_init_if" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:12.951 Cannot find device "nvmf_init_if2" 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.951 10:30:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:12.951 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.209 
10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:13.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:13.209 00:08:13.209 --- 10.0.0.3 ping statistics --- 00:08:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.209 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:13.209 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:13.209 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:08:13.209 00:08:13.209 --- 10.0.0.4 ping statistics --- 00:08:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.209 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:13.209 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:13.209 00:08:13.209 --- 10.0.0.1 ping statistics --- 00:08:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.209 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:13.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:13.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:13.210 00:08:13.210 --- 10.0.0.2 ping statistics --- 00:08:13.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.210 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64032 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64032 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64032 ']' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.210 10:30:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.469 [2024-11-12 10:30:01.995521] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:08:13.469 [2024-11-12 10:30:01.995618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.469 [2024-11-12 10:30:02.156114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.469 [2024-11-12 10:30:02.195213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.469 [2024-11-12 10:30:02.195269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.469 [2024-11-12 10:30:02.195282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.469 [2024-11-12 10:30:02.195292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.469 [2024-11-12 10:30:02.195300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.469 [2024-11-12 10:30:02.195672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.728 [2024-11-12 10:30:02.228706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.728 [2024-11-12 10:30:02.318138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.728 Malloc0 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.728 [2024-11-12 10:30:02.360535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.728 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64051 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64051 /var/tmp/bdevperf.sock 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64051 ']' 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.729 10:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.729 [2024-11-12 10:30:02.423081] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:08:13.729 [2024-11-12 10:30:02.423202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64051 ] 00:08:13.988 [2024-11-12 10:30:02.572822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.988 [2024-11-12 10:30:02.612874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.988 [2024-11-12 10:30:02.647732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.925 NVMe0n1 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.925 10:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.925 Running I/O for 10 seconds... 00:08:17.239 7062.00 IOPS, 27.59 MiB/s [2024-11-12T10:30:06.933Z] 7683.50 IOPS, 30.01 MiB/s [2024-11-12T10:30:07.869Z] 8194.67 IOPS, 32.01 MiB/s [2024-11-12T10:30:08.805Z] 8393.75 IOPS, 32.79 MiB/s [2024-11-12T10:30:09.741Z] 8612.00 IOPS, 33.64 MiB/s [2024-11-12T10:30:10.748Z] 8582.17 IOPS, 33.52 MiB/s [2024-11-12T10:30:11.685Z] 8623.86 IOPS, 33.69 MiB/s [2024-11-12T10:30:13.063Z] 8688.88 IOPS, 33.94 MiB/s [2024-11-12T10:30:14.000Z] 8743.89 IOPS, 34.16 MiB/s [2024-11-12T10:30:14.000Z] 8785.60 IOPS, 34.32 MiB/s 00:08:25.242 Latency(us) 00:08:25.242 [2024-11-12T10:30:14.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:25.242 Verification LBA range: start 0x0 length 0x4000 00:08:25.242 NVMe0n1 : 10.09 8803.67 34.39 0.00 0.00 115727.51 24903.68 97708.22 00:08:25.242 [2024-11-12T10:30:14.000Z] =================================================================================================================== 00:08:25.242 [2024-11-12T10:30:14.000Z] Total : 8803.67 34.39 0.00 0.00 115727.51 24903.68 97708.22 00:08:25.242 { 00:08:25.242 "results": [ 00:08:25.242 { 00:08:25.242 "job": "NVMe0n1", 00:08:25.242 "core_mask": "0x1", 00:08:25.242 "workload": "verify", 00:08:25.242 "status": "finished", 00:08:25.242 "verify_range": { 00:08:25.242 "start": 0, 00:08:25.242 "length": 16384 00:08:25.242 }, 00:08:25.242 "queue_depth": 1024, 00:08:25.242 "io_size": 4096, 00:08:25.242 "runtime": 10.089539, 00:08:25.242 "iops": 8803.672794168297, 00:08:25.242 "mibps": 34.38934685221991, 00:08:25.242 "io_failed": 0, 00:08:25.242 "io_timeout": 0, 00:08:25.242 "avg_latency_us": 115727.51099434537, 00:08:25.242 "min_latency_us": 24903.68, 00:08:25.242 "max_latency_us": 97708.21818181819 00:08:25.242 } 
00:08:25.242 ], 00:08:25.242 "core_count": 1 00:08:25.242 } 00:08:25.242 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64051 00:08:25.242 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64051 ']' 00:08:25.242 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64051 00:08:25.242 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64051 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.243 killing process with pid 64051 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64051' 00:08:25.243 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.243 00:08:25.243 Latency(us) 00:08:25.243 [2024-11-12T10:30:14.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.243 [2024-11-12T10:30:14.001Z] =================================================================================================================== 00:08:25.243 [2024-11-12T10:30:14.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64051 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64051 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.243 10:30:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.243 rmmod nvme_tcp 00:08:25.243 rmmod nvme_fabrics 00:08:25.243 rmmod nvme_keyring 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64032 ']' 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64032 ']' 00:08:25.502 
10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:25.502 killing process with pid 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64032' 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64032 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:25.502 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:25.761 10:30:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:25.761 00:08:25.761 real 0m13.093s 00:08:25.761 user 0m22.930s 00:08:25.761 sys 0m2.063s 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.761 ************************************ 00:08:25.761 END TEST nvmf_queue_depth 00:08:25.761 ************************************ 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.761 ************************************ 00:08:25.761 START TEST nvmf_target_multipath 00:08:25.761 ************************************ 00:08:25.761 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:26.021 * Looking for test storage... 
00:08:26.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:26.021 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.022 --rc genhtml_branch_coverage=1 00:08:26.022 --rc genhtml_function_coverage=1 00:08:26.022 --rc genhtml_legend=1 00:08:26.022 --rc geninfo_all_blocks=1 00:08:26.022 --rc geninfo_unexecuted_blocks=1 00:08:26.022 00:08:26.022 ' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.022 --rc genhtml_branch_coverage=1 00:08:26.022 --rc genhtml_function_coverage=1 00:08:26.022 --rc genhtml_legend=1 00:08:26.022 --rc geninfo_all_blocks=1 00:08:26.022 --rc geninfo_unexecuted_blocks=1 00:08:26.022 00:08:26.022 ' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.022 --rc genhtml_branch_coverage=1 00:08:26.022 --rc genhtml_function_coverage=1 00:08:26.022 --rc genhtml_legend=1 00:08:26.022 --rc geninfo_all_blocks=1 00:08:26.022 --rc geninfo_unexecuted_blocks=1 00:08:26.022 00:08:26.022 ' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.022 --rc genhtml_branch_coverage=1 00:08:26.022 --rc genhtml_function_coverage=1 00:08:26.022 --rc genhtml_legend=1 00:08:26.022 --rc geninfo_all_blocks=1 00:08:26.022 --rc geninfo_unexecuted_blocks=1 00:08:26.022 00:08:26.022 ' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.022 
10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.022 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.022 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.023 10:30:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:26.023 Cannot find device "nvmf_init_br" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:26.023 Cannot find device "nvmf_init_br2" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:26.023 Cannot find device "nvmf_tgt_br" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.023 Cannot find device "nvmf_tgt_br2" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:26.023 Cannot find device "nvmf_init_br" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:26.023 Cannot find device "nvmf_init_br2" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:26.023 Cannot find device "nvmf_tgt_br" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:26.023 Cannot find device "nvmf_tgt_br2" 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:26.023 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:26.282 Cannot find device "nvmf_br" 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:26.282 Cannot find device "nvmf_init_if" 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:26.282 Cannot find device "nvmf_init_if2" 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.282 10:30:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.282 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.282 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:26.282 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:26.282 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:26.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:26.283 00:08:26.283 --- 10.0.0.3 ping statistics --- 00:08:26.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.283 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:26.283 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:26.283 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:08:26.283 00:08:26.283 --- 10.0.0.4 ping statistics --- 00:08:26.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.283 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:26.283 00:08:26.283 --- 10.0.0.1 ping statistics --- 00:08:26.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.283 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:26.283 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:26.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:26.542 00:08:26.542 --- 10.0.0.2 ping statistics --- 00:08:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.542 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64433 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64433 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64433 ']' 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:26.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.542 10:30:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:26.542 [2024-11-12 10:30:15.137608] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:26.542 [2024-11-12 10:30:15.137700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.542 [2024-11-12 10:30:15.290465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.801 [2024-11-12 10:30:15.331509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.801 [2024-11-12 10:30:15.331582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.801 [2024-11-12 10:30:15.331596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.801 [2024-11-12 10:30:15.331605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.801 [2024-11-12 10:30:15.331614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.801 [2024-11-12 10:30:15.332576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.801 [2024-11-12 10:30:15.332758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.801 [2024-11-12 10:30:15.332829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.801 [2024-11-12 10:30:15.333023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.801 [2024-11-12 10:30:15.365913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.368 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.368 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:08:27.368 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.368 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.368 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:27.627 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.627 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:27.885 [2024-11-12 10:30:16.485220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.885 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:28.144 Malloc0 00:08:28.144 10:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:28.403 10:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.661 10:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:28.932 [2024-11-12 10:30:17.624901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:28.932 10:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:29.205 [2024-11-12 10:30:17.885173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:29.205 10:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:29.463 10:30:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:31.997 10:30:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64528 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:31.997 10:30:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:31.997 [global] 00:08:31.997 thread=1 00:08:31.997 invalidate=1 00:08:31.997 rw=randrw 00:08:31.997 time_based=1 00:08:31.997 runtime=6 00:08:31.997 ioengine=libaio 00:08:31.997 direct=1 00:08:31.997 bs=4096 00:08:31.997 iodepth=128 00:08:31.997 norandommap=0 00:08:31.997 numjobs=1 00:08:31.998 00:08:31.998 verify_dump=1 00:08:31.998 verify_backlog=512 00:08:31.998 verify_state_save=0 00:08:31.998 do_verify=1 00:08:31.998 verify=crc32c-intel 00:08:31.998 [job0] 00:08:31.998 filename=/dev/nvme0n1 00:08:31.998 Could not set queue depth (nvme0n1) 00:08:31.998 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.998 fio-3.35 00:08:31.998 Starting 1 thread 00:08:32.564 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:32.822 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:33.080 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:33.080 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:33.081 10:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:33.339 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.906 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:33.907 10:30:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64528 00:08:38.100 00:08:38.100 job0: (groupid=0, jobs=1): err= 0: pid=64549: Tue Nov 12 10:30:26 2024 00:08:38.100 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(237MiB/6007msec) 00:08:38.100 slat (usec): min=7, max=8140, avg=58.99, stdev=230.72 00:08:38.100 clat (usec): min=1622, max=16199, avg=8617.82, stdev=1460.83 00:08:38.100 lat (usec): min=1634, max=16233, avg=8676.82, stdev=1464.95 00:08:38.100 clat percentiles (usec): 00:08:38.100 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 7832], 00:08:38.100 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:08:38.100 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[12125], 00:08:38.100 | 99.00th=[13173], 99.50th=[13566], 99.90th=[13960], 99.95th=[14222], 00:08:38.100 | 99.99th=[15139] 00:08:38.100 bw ( KiB/s): min= 7576, max=27464, per=51.92%, avg=20941.09, stdev=6746.27, samples=11 00:08:38.100 iops : min= 1894, max= 6866, avg=5235.27, stdev=1686.57, samples=11 00:08:38.100 write: IOPS=5966, BW=23.3MiB/s (24.4MB/s)(125MiB/5349msec); 0 zone resets 00:08:38.100 slat (usec): min=16, max=1922, avg=65.47, stdev=158.72 00:08:38.100 clat (usec): min=1996, max=15039, avg=7518.06, stdev=1319.92 00:08:38.100 lat (usec): min=2022, max=15064, avg=7583.53, stdev=1324.78 00:08:38.100 clat percentiles (usec): 00:08:38.100 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5997], 20.00th=[ 6980], 00:08:38.100 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7898], 00:08:38.100 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8979], 00:08:38.100 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13566], 99.95th=[13960], 00:08:38.100 | 99.99th=[14877] 00:08:38.100 bw ( KiB/s): min= 7544, max=27200, per=88.08%, avg=21021.82, stdev=6658.10, samples=11 00:08:38.100 iops : min= 1886, max= 6800, avg=5255.45, stdev=1664.52, samples=11 00:08:38.100 lat (msec) : 2=0.01%, 4=1.35%, 10=91.91%, 20=6.73% 00:08:38.100 cpu : usr=5.43%, sys=21.83%, ctx=5343, majf=0, minf=90 00:08:38.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.100 issued rwts: total=60568,31913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.100 00:08:38.100 Run status group 0 (all jobs): 00:08:38.100 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=237MiB (248MB), run=6007-6007msec 00:08:38.100 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=125MiB (131MB), run=5349-5349msec 00:08:38.100 00:08:38.100 Disk stats (read/write): 00:08:38.100 nvme0n1: ios=59692/31323, merge=0/0, ticks=493691/221077, in_queue=714768, util=98.55% 00:08:38.100 10:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:38.359 10:30:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64628 00:08:38.647 10:30:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:38.647 [global] 00:08:38.647 thread=1 00:08:38.647 invalidate=1 00:08:38.647 rw=randrw 00:08:38.647 time_based=1 00:08:38.647 runtime=6 00:08:38.647 ioengine=libaio 00:08:38.647 direct=1 00:08:38.647 bs=4096 00:08:38.647 iodepth=128 00:08:38.647 norandommap=0 00:08:38.647 numjobs=1 00:08:38.647 00:08:38.647 verify_dump=1 00:08:38.647 verify_backlog=512 00:08:38.647 verify_state_save=0 00:08:38.647 do_verify=1 00:08:38.647 verify=crc32c-intel 00:08:38.647 [job0] 00:08:38.647 filename=/dev/nvme0n1 00:08:38.647 Could not set queue depth (nvme0n1) 00:08:38.647 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:38.647 fio-3.35 00:08:38.647 Starting 1 thread 00:08:39.583 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:39.842 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:40.410 
10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:40.410 10:30:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:40.669 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:40.928 10:30:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64628 00:08:45.117 00:08:45.117 job0: (groupid=0, jobs=1): err= 0: pid=64656: Tue Nov 12 10:30:33 2024 00:08:45.117 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6008msec) 00:08:45.117 slat (usec): min=3, max=6476, avg=42.61, stdev=188.27 00:08:45.117 clat (usec): min=328, max=15988, avg=7565.69, stdev=1952.38 00:08:45.117 lat (usec): min=340, max=16022, avg=7608.30, stdev=1966.03 00:08:45.117 clat percentiles (usec): 00:08:45.117 | 1.00th=[ 2704], 5.00th=[ 4015], 10.00th=[ 4883], 20.00th=[ 6063], 00:08:45.117 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8094], 00:08:45.117 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[11076], 00:08:45.117 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13960], 99.95th=[14091], 00:08:45.117 | 99.99th=[14746] 00:08:45.117 bw ( KiB/s): min= 9344, max=44720, per=53.07%, avg=24674.67, stdev=8999.56, samples=12 00:08:45.117 iops : min= 2336, max=11180, avg=6168.50, stdev=2249.89, samples=12 00:08:45.117 write: IOPS=6949, BW=27.1MiB/s (28.5MB/s)(145MiB/5328msec); 0 zone resets 00:08:45.117 slat (usec): min=12, max=1673, avg=53.64, stdev=132.76 00:08:45.117 clat (usec): min=725, max=15212, avg=6330.75, stdev=1809.29 00:08:45.117 lat (usec): min=751, max=15236, avg=6384.39, stdev=1823.11 00:08:45.117 clat percentiles (usec): 00:08:45.117 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4424], 00:08:45.117 | 30.00th=[ 5145], 40.00th=[ 6259], 50.00th=[ 6915], 60.00th=[ 7242], 00:08:45.117 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 8455], 00:08:45.117 | 99.00th=[10945], 99.50th=[11863], 99.90th=[13173], 99.95th=[13566], 00:08:45.117 | 99.99th=[15139] 00:08:45.117 bw ( KiB/s): min= 9832, max=43824, per=88.63%, avg=24638.67, stdev=8764.60, samples=12 00:08:45.117 iops : min= 2458, max=10956, avg=6159.67, stdev=2191.15, samples=12 00:08:45.117 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.06% 00:08:45.117 lat (msec) : 2=0.38%, 4=7.25%, 10=87.59%, 20=4.69% 00:08:45.117 cpu : usr=6.29%, sys=23.19%, ctx=6142, majf=0, minf=114 00:08:45.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:08:45.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.117 issued rwts: total=69836,37026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.117 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:08:45.117 00:08:45.117 Run status group 0 (all jobs): 00:08:45.117 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6008-6008msec 00:08:45.117 WRITE: bw=27.1MiB/s (28.5MB/s), 27.1MiB/s-27.1MiB/s (28.5MB/s-28.5MB/s), io=145MiB (152MB), run=5328-5328msec 00:08:45.117 00:08:45.117 Disk stats (read/write): 00:08:45.117 nvme0n1: ios=68963/36349, merge=0/0, ticks=497152/212506, in_queue=709658, util=98.61% 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:08:45.117 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.376 rmmod nvme_tcp 00:08:45.376 rmmod nvme_fabrics 00:08:45.376 rmmod nvme_keyring 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64433 ']' 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64433 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64433 ']' 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64433 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.376 10:30:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64433 00:08:45.376 killing process with pid 64433 00:08:45.376 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:45.376 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:45.376 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64433' 00:08:45.376 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64433 00:08:45.376 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64433 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:45.637 10:30:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.637 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.897 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:45.897 ************************************ 00:08:45.897 END TEST nvmf_target_multipath 00:08:45.897 ************************************ 00:08:45.897 00:08:45.897 real 0m19.919s 00:08:45.897 user 1m14.116s 00:08:45.897 sys 0m10.375s 00:08:45.897 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.897 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.898 ************************************ 00:08:45.898 START TEST nvmf_zcopy 00:08:45.898 ************************************ 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:45.898 * Looking for test storage... 
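The multipath traces above step through the check_ana_state helper from target/multipath.sh several times. A minimal bash sketch of what those traced lines imply follows; the retry/sleep handling around the 20-second timeout never fires in this run, so that part is an assumption rather than something visible in the log.

    check_ana_state() {
        local path=$1 ana_state=$2                    # e.g. nvme0c1n1 optimized
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Wait until the sysfs file exists and reports the expected ANA state,
        # giving up after roughly $timeout seconds (loop details assumed).
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }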
00:08:45.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.898 --rc genhtml_branch_coverage=1 00:08:45.898 --rc genhtml_function_coverage=1 00:08:45.898 --rc genhtml_legend=1 00:08:45.898 --rc geninfo_all_blocks=1 00:08:45.898 --rc geninfo_unexecuted_blocks=1 00:08:45.898 00:08:45.898 ' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.898 --rc genhtml_branch_coverage=1 00:08:45.898 --rc genhtml_function_coverage=1 00:08:45.898 --rc genhtml_legend=1 00:08:45.898 --rc geninfo_all_blocks=1 00:08:45.898 --rc geninfo_unexecuted_blocks=1 00:08:45.898 00:08:45.898 ' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.898 --rc genhtml_branch_coverage=1 00:08:45.898 --rc genhtml_function_coverage=1 00:08:45.898 --rc genhtml_legend=1 00:08:45.898 --rc geninfo_all_blocks=1 00:08:45.898 --rc geninfo_unexecuted_blocks=1 00:08:45.898 00:08:45.898 ' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.898 --rc genhtml_branch_coverage=1 00:08:45.898 --rc genhtml_function_coverage=1 00:08:45.898 --rc genhtml_legend=1 00:08:45.898 --rc geninfo_all_blocks=1 00:08:45.898 --rc geninfo_unexecuted_blocks=1 00:08:45.898 00:08:45.898 ' 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
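The scripts/common.sh lines above are the lt/cmp_versions helper deciding whether the installed lcov is older than version 2; it is, so the --rc lcov_branch_coverage/lcov_function_coverage options seen in the surrounding export lines are used. A condensed sketch of that comparison, with the pieces the trace does not show filled in as assumptions:

    lt() { cmp_versions "$1" '<' "$2"; }              # lt 1.15 2 -> success

    cmp_versions() {
        local ver1 ver2 op=$2 lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"                # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   (( lt == 0 && gt == 0 )) ;;
        esac
    }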
00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.898 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
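The nvmf/common.sh lines above assemble the NVMF_APP argument array that nvmfappstart later uses to launch the target. A rough sketch of that assembly; the initial binary path and the defaulting of NVMF_APP_SHM_ID are assumptions, while the appended options and the final command line do appear in the trace further down:

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed starting value
    : "${NVMF_APP_SHM_ID:=0}"
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id and tracepoint group mask
    # Once nvmf_veth_init has created the target namespace, the command is prefixed with it:
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    # so nvmfappstart -m 0x2 effectively runs:
    #   ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2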
00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:46.158 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:46.159 Cannot find device "nvmf_init_br" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:46.159 10:30:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:46.159 Cannot find device "nvmf_init_br2" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:46.159 Cannot find device "nvmf_tgt_br" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:46.159 Cannot find device "nvmf_tgt_br2" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:46.159 Cannot find device "nvmf_init_br" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:46.159 Cannot find device "nvmf_init_br2" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:46.159 Cannot find device "nvmf_tgt_br" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:46.159 Cannot find device "nvmf_tgt_br2" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:46.159 Cannot find device "nvmf_br" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:46.159 Cannot find device "nvmf_init_if" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:46.159 Cannot find device "nvmf_init_if2" 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:46.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:46.159 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:46.418 10:30:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:46.418 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:46.418 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:46.418 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:46.419 10:30:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:46.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:46.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:46.419 00:08:46.419 --- 10.0.0.3 ping statistics --- 00:08:46.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.419 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:46.419 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:46.419 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:08:46.419 00:08:46.419 --- 10.0.0.4 ping statistics --- 00:08:46.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.419 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:46.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:46.419 00:08:46.419 --- 10.0.0.1 ping statistics --- 00:08:46.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.419 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:46.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:46.419 00:08:46.419 --- 10.0.0.2 ping statistics --- 00:08:46.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.419 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64954 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64954 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 64954 ']' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.419 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.419 [2024-11-12 10:30:35.148617] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
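For orientation, the nvmf_veth_init trace above reduces to a small bridged topology: two initiator-side veths (10.0.0.1 and 10.0.0.2) in the root namespace, two target-side veths (10.0.0.3 and 10.0.0.4) inside nvmf_tgt_ns_spdk, their peer ends enslaved to the nvmf_br bridge, and iptables rules admitting TCP port 4420. Condensed from the commands in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # 10.0.0.3/24, moved into the namespace
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # 10.0.0.4/24, moved into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br       # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    # the four pings above confirm that each side can reach the other's addresses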
00:08:46.419 [2024-11-12 10:30:35.148729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.679 [2024-11-12 10:30:35.296275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.679 [2024-11-12 10:30:35.328234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.679 [2024-11-12 10:30:35.328282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.679 [2024-11-12 10:30:35.328308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.679 [2024-11-12 10:30:35.328316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.679 [2024-11-12 10:30:35.328322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.679 [2024-11-12 10:30:35.328595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.679 [2024-11-12 10:30:35.357780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.679 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.679 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:46.679 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.679 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.679 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.938 [2024-11-12 10:30:35.463481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.938 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.939 [2024-11-12 10:30:35.479578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 malloc0 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.939 { 00:08:46.939 "params": { 00:08:46.939 "name": "Nvme$subsystem", 00:08:46.939 "trtype": "$TEST_TRANSPORT", 00:08:46.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.939 "adrfam": "ipv4", 00:08:46.939 "trsvcid": "$NVMF_PORT", 00:08:46.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.939 "hdgst": ${hdgst:-false}, 00:08:46.939 "ddgst": ${ddgst:-false} 00:08:46.939 }, 00:08:46.939 "method": "bdev_nvme_attach_controller" 00:08:46.939 } 00:08:46.939 EOF 00:08:46.939 )") 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
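Stripped of the xtrace noise, the zcopy target setup above amounts to the RPC sequence below; rpc_cmd is assumed to forward to scripts/rpc.py on the default socket, the same script the multipath test invoked directly:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1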
00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:46.939 10:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.939 "params": { 00:08:46.939 "name": "Nvme1", 00:08:46.939 "trtype": "tcp", 00:08:46.939 "traddr": "10.0.0.3", 00:08:46.939 "adrfam": "ipv4", 00:08:46.939 "trsvcid": "4420", 00:08:46.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.939 "hdgst": false, 00:08:46.939 "ddgst": false 00:08:46.939 }, 00:08:46.939 "method": "bdev_nvme_attach_controller" 00:08:46.939 }' 00:08:46.939 [2024-11-12 10:30:35.567891] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:08:46.939 [2024-11-12 10:30:35.567986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64980 ] 00:08:47.198 [2024-11-12 10:30:35.719613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.198 [2024-11-12 10:30:35.761661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.198 [2024-11-12 10:30:35.803528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.198 Running I/O for 10 seconds... 00:08:49.513 6566.00 IOPS, 51.30 MiB/s [2024-11-12T10:30:39.209Z] 6554.00 IOPS, 51.20 MiB/s [2024-11-12T10:30:40.153Z] 6531.33 IOPS, 51.03 MiB/s [2024-11-12T10:30:41.089Z] 6575.00 IOPS, 51.37 MiB/s [2024-11-12T10:30:42.027Z] 6575.60 IOPS, 51.37 MiB/s [2024-11-12T10:30:42.965Z] 6581.17 IOPS, 51.42 MiB/s [2024-11-12T10:30:44.342Z] 6586.86 IOPS, 51.46 MiB/s [2024-11-12T10:30:44.911Z] 6591.88 IOPS, 51.50 MiB/s [2024-11-12T10:30:46.340Z] 6583.00 IOPS, 51.43 MiB/s [2024-11-12T10:30:46.340Z] 6582.70 IOPS, 51.43 MiB/s 00:08:57.582 Latency(us) 00:08:57.582 [2024-11-12T10:30:46.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.582 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:57.582 Verification LBA range: start 0x0 length 0x1000 00:08:57.582 Nvme1n1 : 10.02 6584.48 51.44 0.00 0.00 19379.91 2755.49 26810.18 00:08:57.582 [2024-11-12T10:30:46.340Z] =================================================================================================================== 00:08:57.582 [2024-11-12T10:30:46.340Z] Total : 6584.48 51.44 0.00 0.00 19379.91 2755.49 26810.18 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65097 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.582 { 00:08:57.582 "params": { 00:08:57.582 "name": "Nvme$subsystem", 00:08:57.582 "trtype": "$TEST_TRANSPORT", 00:08:57.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.582 "adrfam": "ipv4", 00:08:57.582 "trsvcid": "$NVMF_PORT", 00:08:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.582 "hdgst": ${hdgst:-false}, 00:08:57.582 "ddgst": ${ddgst:-false} 00:08:57.582 }, 00:08:57.582 "method": "bdev_nvme_attach_controller" 00:08:57.582 } 00:08:57.582 EOF 00:08:57.582 )") 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:57.582 [2024-11-12 10:30:46.062910] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.062949] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:57.582 10:30:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.582 "params": { 00:08:57.582 "name": "Nvme1", 00:08:57.582 "trtype": "tcp", 00:08:57.582 "traddr": "10.0.0.3", 00:08:57.582 "adrfam": "ipv4", 00:08:57.582 "trsvcid": "4420", 00:08:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.582 "hdgst": false, 00:08:57.582 "ddgst": false 00:08:57.582 }, 00:08:57.582 "method": "bdev_nvme_attach_controller" 00:08:57.582 }' 00:08:57.582 [2024-11-12 10:30:46.074869] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.074894] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.082878] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.082901] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.094871] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.094894] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.105936] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
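The gen_nvmf_target_json output traced above, reformatted here for readability, is the bdev_nvme_attach_controller fragment fed via the /dev/fd redirection to this second bdevperf instance for the 5-second randrw run; how the jq step embeds it in the final --json document is not visible in the trace:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }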
00:08:57.582 [2024-11-12 10:30:46.106515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65097 ] 00:08:57.582 [2024-11-12 10:30:46.106880] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.106899] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.114884] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.115040] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.126882] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.126907] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.134873] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.134896] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.142880] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.142903] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.154877] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.154901] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.166882] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.166904] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.178885] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.178907] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.190905] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.190930] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.202907] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.202932] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.214906] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.215074] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.226919] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.226946] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.238919] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.238944] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.246919] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.246943] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.582 [2024-11-12 10:30:46.249324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.582 [2024-11-12 10:30:46.254937] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.582 [2024-11-12 10:30:46.254967] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.262990] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.263018] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.270933] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.270958] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.278932] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.278955] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.281659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.583 [2024-11-12 10:30:46.286952] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.286980] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.294957] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.294984] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.302959] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.303286] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.310991] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.311233] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.318959] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.319124] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.319179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.583 [2024-11-12 10:30:46.326984] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.327221] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.583 [2024-11-12 10:30:46.334961] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.583 [2024-11-12 10:30:46.335128] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.342983] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.343148] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.350993] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.351158] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.358986] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.359205] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.367072] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.367222] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.375044] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.375223] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.383067] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.383259] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.391092] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.391268] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.399060] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.399240] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.407061] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.407259] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.415065] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.415265] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 Running I/O for 5 seconds... 
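(The repeating spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace" pairs throughout the rest of this run are emitted when the target's nvmf_subsystem_add_ns RPC path is asked to attach a namespace under an NSID that is already taken. A minimal way to provoke that same pair of messages against a running target might look like the sketch below; the subsystem NQN matches the log, while the Malloc0 bdev name and the default RPC socket are assumptions.)

# Sketch only: assumes a running nvmf target with bdev Malloc0 available and
# subsystem nqn.2016-06.io.spdk:cnode1 created, using the default /var/tmp/spdk.sock.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # fails: NSID 1 already in use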
00:08:57.842 [2024-11-12 10:30:46.431491] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.431692] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.441219] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.441399] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.452761] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.452918] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.469926] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.470083] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.487565] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.487724] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.497307] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.497455] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.507156] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.507325] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.517242] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.517286] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.527259] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.527291] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.536799] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.536832] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.546570] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.546731] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.556717] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.556758] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.566546] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.566581] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.576258] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.576289] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.586083] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 
[2024-11-12 10:30:46.586260] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.842 [2024-11-12 10:30:46.597164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.842 [2024-11-12 10:30:46.597235] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.101 [2024-11-12 10:30:46.609575] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.609751] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.618840] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.618872] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.629455] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.629488] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.640379] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.640413] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.655541] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.655754] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.672075] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.672124] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.683089] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.683278] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.699037] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.699205] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.716534] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.716567] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.725774] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.725804] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.735549] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.735580] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.745329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.745376] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.755098] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.755287] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.765033] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.765081] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.774604] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.774763] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.784537] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.784568] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.795716] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.795752] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.807573] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.807605] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.819883] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.819915] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.837062] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.837110] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.102 [2024-11-12 10:30:46.854168] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.102 [2024-11-12 10:30:46.854379] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.865653] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.865825] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.876181] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.876244] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.886156] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.886216] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.895426] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.895458] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.905537] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.905568] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.915572] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.915620] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.925283] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.925314] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.937266] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.937457] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.946456] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.946488] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.956961] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.957197] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.968848] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.969014] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.978043] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.978074] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.988583] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.988626] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:46.998251] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:46.998281] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:47.007916] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:47.008077] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.362 [2024-11-12 10:30:47.022646] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.362 [2024-11-12 10:30:47.022818] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.032115] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.032333] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.044156] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.044348] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.055816] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.055973] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.064491] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.064691] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.075431] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.075577] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.086795] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.086952] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.095658] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.095816] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.109960] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.110137] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.363 [2024-11-12 10:30:47.119593] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.363 [2024-11-12 10:30:47.119857] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.130915] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.131119] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.140914] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.141095] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.150922] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.151080] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.160867] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.161075] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.170774] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.170932] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.181307] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.181448] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.193270] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.193422] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.202389] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.202550] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.216289] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.216433] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.227215] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.227388] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.239606] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.239756] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.249015] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.249214] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.260876] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.261061] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.277200] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.277370] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.286689] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.286848] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.301468] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.301612] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.312712] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.312944] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.329030] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.329262] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.344404] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.344452] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.352617] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.352648] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.363354] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.363384] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.622 [2024-11-12 10:30:47.373209] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.622 [2024-11-12 10:30:47.373252] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.385181] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.385243] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.400962] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.401013] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.410988] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.411173] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 12338.00 IOPS, 96.39 MiB/s [2024-11-12T10:30:47.639Z] [2024-11-12 
10:30:47.422124] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.422152] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.431597] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.431629] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.445815] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.445978] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.455147] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.455206] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.466026] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.466058] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.477027] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.477207] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.487983] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.488145] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.501418] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.501466] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.519343] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.519374] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.529538] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.529569] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.540065] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.540096] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.551839] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.551870] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.560448] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.560478] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.572056] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.572087] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.583794] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.583824] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.592069] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.592101] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.604709] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.604745] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.614609] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.614771] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.624810] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.624844] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.881 [2024-11-12 10:30:47.634961] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.881 [2024-11-12 10:30:47.634995] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.645608] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.645656] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.655390] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.655423] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.666488] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.666554] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.679656] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.679688] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.688802] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.688952] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.700688] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.700733] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.710450] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.710495] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.725161] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.725236] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.734871] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.734902] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.745853] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.745885] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.755646] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.755824] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.765704] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.765735] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.775164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.775359] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.789116] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.789159] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.797988] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.798020] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.808359] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.808390] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.818097] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.818130] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.828004] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.828035] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.842084] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.842116] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.851134] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.851329] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.861788] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.861819] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.873476] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.873538] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.883851] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.883882] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.141 [2024-11-12 10:30:47.896288] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.141 [2024-11-12 10:30:47.896337] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.907406] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.907472] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.923581] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.923630] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.938877] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.939028] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.947967] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.948141] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.963478] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.963637] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.973463] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.973657] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.984550] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.984745] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:47.996105] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:47.996297] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.004330] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.004497] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.016525] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.016741] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.026088] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.026277] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.036215] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.036371] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.046417] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.046597] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.056757] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.056912] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.070418] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.070548] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.080024] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.080205] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.090662] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.090818] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.100504] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.100722] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.110783] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.110939] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.121014] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.121200] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.130997] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.400 [2024-11-12 10:30:48.131153] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.400 [2024-11-12 10:30:48.141096] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.401 [2024-11-12 10:30:48.141301] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.401 [2024-11-12 10:30:48.151258] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.401 [2024-11-12 10:30:48.151421] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.165688] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.165847] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.174269] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.174433] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.185382] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.185541] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.195598] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.195754] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.205765] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.205921] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.215785] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.215941] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.226111] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.226280] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.235727] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.235884] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.245582] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.245755] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.255575] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.255730] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.265688] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.265844] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.275510] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.275683] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.285876] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.286031] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.295929] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.296086] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.305995] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.306151] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.315880] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.316036] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.325734] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.325895] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.335659] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.335815] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.350280] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.350311] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.361329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.361360] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.377078] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.377256] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.393732] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.393765] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.404011] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.661 [2024-11-12 10:30:48.404077] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.661 [2024-11-12 10:30:48.416766] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.416915] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 12356.00 IOPS, 96.53 MiB/s [2024-11-12T10:30:48.679Z] [2024-11-12 10:30:48.428333] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.428530] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.439602] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.439762] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.454164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.454342] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.468573] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.468901] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.477476] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.477754] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.488091] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.488276] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.498085] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.498249] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.507852] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.508010] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.517541] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.517697] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.527541] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.527696] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.537445] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:59.921 [2024-11-12 10:30:48.537635] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.547821] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.547983] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.558936] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.559096] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.572118] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.572308] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.582014] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.582174] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.593558] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.593703] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.603918] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.604076] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.614171] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.614382] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.624715] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.624874] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.635072] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.635255] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.644310] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.644459] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.654675] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.921 [2024-11-12 10:30:48.654831] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.921 [2024-11-12 10:30:48.664579] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.922 [2024-11-12 10:30:48.664759] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.922 [2024-11-12 10:30:48.674289] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.922 [2024-11-12 10:30:48.674466] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.685704] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.685866] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.698306] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.698469] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.707886] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.708042] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.720389] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.720532] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.731363] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.731508] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.739656] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.739811] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.751624] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.751797] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.760792] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.760940] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.770613] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.770768] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.780373] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.780649] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.795087] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.795391] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.811467] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.811656] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.827484] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.827646] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.838645] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.838801] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.855054] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.855222] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.871608] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.871642] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.881061] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.881109] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.891154] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.891214] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.901114] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.901145] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.910841] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.910873] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.920853] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.920885] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.181 [2024-11-12 10:30:48.930554] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.181 [2024-11-12 10:30:48.930585] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:48.941116] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:48.941166] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:48.959616] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:48.959679] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:48.974697] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:48.974730] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:48.990176] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:48.990397] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:48.998986] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:48.999018] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.011114] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.011146] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.021383] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.021415] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.031674] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.031706] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.042951] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.042982] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.051712] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.051743] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.063421] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.063454] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.080234] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.080265] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.091339] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.440 [2024-11-12 10:30:49.091370] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.440 [2024-11-12 10:30:49.099669] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.099701] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.111190] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.111232] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.120815] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.120975] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.134465] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.134499] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.144705] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.144740] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.154534] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.154565] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.164258] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.164304] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.178582] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.178636] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.441 [2024-11-12 10:30:49.187566] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.441 [2024-11-12 10:30:49.187741] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.202887] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.203048] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.212570] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.212748] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.226452] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.226627] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.235745] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.235901] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.250005] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.250158] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.258694] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.258849] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.270863] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.271019] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.289295] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.289453] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.305601] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.305760] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.314743] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.314898] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.324970] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.325195] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.334688] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.334843] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.344826] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.344992] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.354736] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.354893] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.368859] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.369209] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.378053] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.378220] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.388396] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.388536] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.398463] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.398635] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.408614] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.408802] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.418602] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.418633] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 12426.33 IOPS, 97.08 MiB/s [2024-11-12T10:30:49.458Z] [2024-11-12 10:30:49.429089] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.429126] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.440295] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.440359] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.700 [2024-11-12 10:30:49.451981] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.700 [2024-11-12 10:30:49.452192] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.463886] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.464062] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.474937] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.475112] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.487171] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.487231] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.495797] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.495828] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.505723] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.505880] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.515411] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.515552] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.525370] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
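The pair of messages repeating through this stretch of the log — spdk_nvmf_subsystem_add_ns_ext rejecting the request and the paused-subsystem RPC callback nvmf_rpc_ns_paused then logging "Unable to add namespace" — is what the target emits every time an nvmf_subsystem_add_ns RPC asks for an NSID that is already attached. A minimal reproduction against a running target looks like the sketch below; this is illustrative only and not part of the captured output (the subsystem NQN and the malloc0 bdev name are taken from the rpc_cmd trace further down, and the exact background loop zcopy.sh runs while I/O is in flight may differ):

  # assumes a running SPDK target with subsystem nqn.2016-06.io.spdk:cnode1 and a bdev named malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first call attaches NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat fails: "Requested NSID 1 already in use"

Each failed call shows up as one subsystem.c/nvmf_rpc.c error pair like the ones surrounding this point, while the interleaved "IOPS, MiB/s" markers come from the I/O job running in parallel.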
00:09:00.968 [2024-11-12 10:30:49.525511] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.534908] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.535062] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.544365] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.544507] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.554219] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.554373] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.563873] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.564029] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.573667] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.573823] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.583534] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.583710] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.593456] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.593630] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.603314] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.603455] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.613829] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.614013] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.630151] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.630356] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.645954] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.646113] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.655050] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.655220] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.667840] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.668009] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.684124] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.684291] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.701690] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.701849] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.968 [2024-11-12 10:30:49.712018] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.968 [2024-11-12 10:30:49.712173] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.727102] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.727376] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.743548] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.743708] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.761918] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.762096] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.778378] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.778409] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.789630] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.789661] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.797817] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.797847] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.809273] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.809315] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.825171] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.825485] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.842315] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.842348] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.851782] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.851813] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.861896] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.861927] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.871680] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.871711] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.881135] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.881325] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.891110] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.891141] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.900846] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.901014] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.910601] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.910632] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.920334] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.920365] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.930300] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.930332] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.939837] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.939995] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.949808] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.949840] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.959427] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.959458] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.969146] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.969203] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.228 [2024-11-12 10:30:49.979010] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.228 [2024-11-12 10:30:49.979169] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.487 [2024-11-12 10:30:49.993683] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.487 [2024-11-12 10:30:49.993841] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.487 [2024-11-12 10:30:50.011800] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.487 [2024-11-12 10:30:50.011967] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.487 [2024-11-12 10:30:50.021799] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.487 [2024-11-12 10:30:50.021947] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.487 [2024-11-12 10:30:50.037330] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.487 [2024-11-12 10:30:50.037491] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.053257] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.053619] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.063132] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.063329] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.075662] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.075822] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.086328] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.086460] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.096850] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.097028] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.109442] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.109586] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.118129] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.118317] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.130948] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.131105] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.141984] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.142137] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.158680] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.158912] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.175145] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.175338] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.186378] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.186520] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.194814] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.194970] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.206330] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.206476] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.215751] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.215908] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.226128] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.226165] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.488 [2024-11-12 10:30:50.236020] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.488 [2024-11-12 10:30:50.236052] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.246811] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.246991] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.259885] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.259919] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.269165] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.269219] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.281696] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.281856] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.291337] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.291369] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.301265] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.301446] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.311153] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.311211] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.321133] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.321313] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.330834] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.330867] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.345209] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.345403] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.355538] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.355571] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.368142] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.368317] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.379549] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.379722] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.396222] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.396273] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.413176] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.413243] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 12406.00 IOPS, 96.92 MiB/s [2024-11-12T10:30:50.506Z] [2024-11-12 10:30:50.430987] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.431039] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.440570] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.440603] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.450835] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.450867] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.461774] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.461808] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.473438] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.473472] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.484786] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.484944] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.748 [2024-11-12 10:30:50.495965] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.748 [2024-11-12 10:30:50.496165] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.507736] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.507768] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.522519] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.522715] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.538352] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.538382] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.555661] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.555693] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.566941] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:02.008 [2024-11-12 10:30:50.566972] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.582867] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.582898] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.594408] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.594440] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.602591] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.602622] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.614539] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.614571] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.625371] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.625521] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.633843] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.633874] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.645894] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.645926] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.662657] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.662690] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.677695] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.677730] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.686370] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.686409] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.700060] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.700096] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.711803] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.711835] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.726705] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.726738] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.737451] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.737485] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.752387] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.752421] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.008 [2024-11-12 10:30:50.762627] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.008 [2024-11-12 10:30:50.762659] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.773501] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.773564] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.790849] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.790891] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.807487] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.807678] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.818697] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.818853] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.827017] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.827049] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.841906] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.842174] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.850084] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.850115] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.861701] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.861732] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.870710] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.870742] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.882358] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.882388] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.893695] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.893726] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.901931] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.901962] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.913238] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.913281] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.924470] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.924656] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.933408] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.933440] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.943332] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.943365] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.952639] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.952711] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.962297] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.962327] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.971973] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.972005] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.981636] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.981901] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:50.991513] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:50.991685] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:51.001334] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:51.001474] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:51.010938] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:51.011096] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.268 [2024-11-12 10:30:51.020518] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.268 [2024-11-12 10:30:51.020715] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.031095] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.031263] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.041057] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.041273] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.051362] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.051506] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.061176] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.061365] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.070733] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.070893] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.080017] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.080173] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.089512] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.089705] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.099472] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.099648] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.109986] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.110316] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.123388] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.123608] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.139277] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.139453] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.148949] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.149150] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.161650] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.161808] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.171368] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.171512] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.182773] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.182934] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.193332] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.193393] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.204329] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.204376] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.216278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.216310] 
nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.225707] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.225739] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.235971] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.236151] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.248234] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.248266] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.528 [2024-11-12 10:30:51.256570] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.528 [2024-11-12 10:30:51.256602] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.529 [2024-11-12 10:30:51.267342] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.529 [2024-11-12 10:30:51.267373] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.529 [2024-11-12 10:30:51.276733] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.529 [2024-11-12 10:30:51.276767] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.287509] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.287573] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.302313] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.302362] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.313252] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.313284] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.329728] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.329760] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.347247] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.347279] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.357644] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.357676] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.371064] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.371097] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.380446] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.380629] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.394224] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.394255] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.402959] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.402991] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.413414] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.413446] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 12426.20 IOPS, 97.08 MiB/s [2024-11-12T10:30:51.547Z] [2024-11-12 10:30:51.423366] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.423397] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.429982] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.430013] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.789
00:09:02.789 Latency(us)
00:09:02.789 [2024-11-12T10:30:51.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:02.789 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:02.789 Nvme1n1 : 5.01 12427.90 97.09 0.00 0.00 10287.64 4230.05 19184.17
00:09:02.789 [2024-11-12T10:30:51.547Z] ===================================================================================================================
00:09:02.789 [2024-11-12T10:30:51.547Z] Total : 12427.90 97.09 0.00 0.00 10287.64 4230.05 19184.17
00:09:02.789 [2024-11-12 10:30:51.437981] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.438028] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.445977] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.446005] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.454005] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.454047] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.466031] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.466074] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.478035] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.478097] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.490059] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.490116] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.502060] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.502329] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12
10:30:51.514062] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.514353] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.526017] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.526046] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.789 [2024-11-12 10:30:51.534007] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.789 [2024-11-12 10:30:51.534035] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.049 [2024-11-12 10:30:51.550033] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.049 [2024-11-12 10:30:51.550087] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.049 [2024-11-12 10:30:51.558001] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.049 [2024-11-12 10:30:51.558028] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.049 [2024-11-12 10:30:51.566001] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.049 [2024-11-12 10:30:51.566026] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.049 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65097) - No such process 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65097 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.049 delay0 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.049 10:30:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:03.049 [2024-11-12 10:30:51.789675] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:09.617 
Initializing NVMe Controllers 00:09:09.617 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.617 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:09.617 Initialization complete. Launching workers. 00:09:09.617 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:09:09.617 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:09:09.617 success 223, unsuccessful 130, failed 0 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.617 rmmod nvme_tcp 00:09:09.617 rmmod nvme_fabrics 00:09:09.617 rmmod nvme_keyring 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64954 ']' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64954 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 64954 ']' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 64954 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64954 00:09:09.617 killing process with pid 64954 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64954' 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 64954 00:09:09.617 10:30:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 64954 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.617 10:30:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:09.617 00:09:09.617 real 0m23.910s 00:09:09.617 user 0m39.216s 00:09:09.617 sys 0m6.549s 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.617 ************************************ 00:09:09.617 END TEST nvmf_zcopy 00:09:09.617 ************************************ 00:09:09.617 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.877 ************************************ 00:09:09.877 START TEST nvmf_nmic 00:09:09.877 ************************************ 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:09.877 * Looking for test storage... 00:09:09.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:09.877 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.137 --rc genhtml_branch_coverage=1 00:09:10.137 --rc genhtml_function_coverage=1 00:09:10.137 --rc genhtml_legend=1 00:09:10.137 --rc geninfo_all_blocks=1 00:09:10.137 --rc geninfo_unexecuted_blocks=1 00:09:10.137 00:09:10.137 ' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.137 --rc genhtml_branch_coverage=1 00:09:10.137 --rc genhtml_function_coverage=1 00:09:10.137 --rc genhtml_legend=1 00:09:10.137 --rc geninfo_all_blocks=1 00:09:10.137 --rc geninfo_unexecuted_blocks=1 00:09:10.137 00:09:10.137 ' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.137 --rc genhtml_branch_coverage=1 00:09:10.137 --rc genhtml_function_coverage=1 00:09:10.137 --rc genhtml_legend=1 00:09:10.137 --rc geninfo_all_blocks=1 00:09:10.137 --rc geninfo_unexecuted_blocks=1 00:09:10.137 00:09:10.137 ' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.137 --rc genhtml_branch_coverage=1 00:09:10.137 --rc genhtml_function_coverage=1 00:09:10.137 --rc genhtml_legend=1 00:09:10.137 --rc geninfo_all_blocks=1 00:09:10.137 --rc geninfo_unexecuted_blocks=1 00:09:10.137 00:09:10.137 ' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.137 10:30:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.137 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:10.138 10:30:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:10.138 Cannot 
find device "nvmf_init_br" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:10.138 Cannot find device "nvmf_init_br2" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:10.138 Cannot find device "nvmf_tgt_br" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.138 Cannot find device "nvmf_tgt_br2" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:10.138 Cannot find device "nvmf_init_br" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:10.138 Cannot find device "nvmf_init_br2" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:10.138 Cannot find device "nvmf_tgt_br" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.138 Cannot find device "nvmf_tgt_br2" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.138 Cannot find device "nvmf_br" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.138 Cannot find device "nvmf_init_if" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.138 Cannot find device "nvmf_init_if2" 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.138 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.398 10:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:10.398 00:09:10.398 --- 10.0.0.3 ping statistics --- 00:09:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.398 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:09:10.398 00:09:10.398 --- 10.0.0.4 ping statistics --- 00:09:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.398 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:09:10.398 00:09:10.398 --- 10.0.0.1 ping statistics --- 00:09:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.398 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:10.398 00:09:10.398 --- 10.0.0.2 ping statistics --- 00:09:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.398 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65468 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65468 00:09:10.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65468 ']' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.398 10:30:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.398 [2024-11-12 10:30:59.143453] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:09:10.399 [2024-11-12 10:30:59.143543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.657 [2024-11-12 10:30:59.298004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.657 [2024-11-12 10:30:59.341429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.657 [2024-11-12 10:30:59.341698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.657 [2024-11-12 10:30:59.341876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.657 [2024-11-12 10:30:59.342063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.657 [2024-11-12 10:30:59.342115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.657 [2024-11-12 10:30:59.343162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.657 [2024-11-12 10:30:59.343471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.657 [2024-11-12 10:30:59.343244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.658 [2024-11-12 10:30:59.343371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.658 [2024-11-12 10:30:59.376532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.591 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 [2024-11-12 10:31:00.159471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 Malloc0 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.592 10:31:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 [2024-11-12 10:31:00.213655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.592 test case1: single bdev can't be used in multiple subsystems 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 [2024-11-12 10:31:00.237541] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:11.592 [2024-11-12 10:31:00.237578] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:11.592 [2024-11-12 10:31:00.237605] nvmf_rpc.c:1521:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.592 request: 00:09:11.592 { 00:09:11.592 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:11.592 "namespace": { 00:09:11.592 "bdev_name": "Malloc0", 00:09:11.592 "no_auto_visible": false 00:09:11.592 }, 00:09:11.592 "method": "nvmf_subsystem_add_ns", 00:09:11.592 "req_id": 1 00:09:11.592 } 00:09:11.592 Got JSON-RPC error response 00:09:11.592 response: 00:09:11.592 { 00:09:11.592 "code": -32602, 00:09:11.592 "message": "Invalid parameters" 00:09:11.592 } 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:11.592 Adding namespace failed - expected result. 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:11.592 test case2: host connect to nvmf target in multiple paths 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.592 [2024-11-12 10:31:00.253660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.592 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:11.850 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:11.850 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.851 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:11.851 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.851 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:11.851 10:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.379 10:31:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:14.379 10:31:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:14.379 [global] 00:09:14.379 thread=1 00:09:14.379 invalidate=1 00:09:14.379 rw=write 00:09:14.379 time_based=1 00:09:14.379 runtime=1 00:09:14.379 ioengine=libaio 00:09:14.379 direct=1 00:09:14.379 bs=4096 00:09:14.379 iodepth=1 00:09:14.379 norandommap=0 00:09:14.379 numjobs=1 00:09:14.379 00:09:14.379 verify_dump=1 00:09:14.379 verify_backlog=512 00:09:14.379 verify_state_save=0 00:09:14.379 do_verify=1 00:09:14.379 verify=crc32c-intel 00:09:14.379 [job0] 00:09:14.379 filename=/dev/nvme0n1 00:09:14.379 Could not set queue depth (nvme0n1) 00:09:14.379 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.379 fio-3.35 00:09:14.379 Starting 1 thread 00:09:15.313 00:09:15.313 job0: (groupid=0, jobs=1): err= 0: pid=65559: Tue Nov 12 10:31:03 2024 00:09:15.313 read: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:09:15.313 slat (nsec): min=12638, max=59711, avg=15728.28, stdev=4739.55 00:09:15.313 clat (usec): min=133, max=822, avg=180.96, stdev=31.37 00:09:15.313 lat (usec): min=147, max=837, avg=196.69, stdev=32.05 00:09:15.313 clat percentiles (usec): 00:09:15.313 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:09:15.313 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:09:15.313 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 221], 00:09:15.313 | 99.00th=[ 251], 99.50th=[ 306], 99.90th=[ 627], 99.95th=[ 766], 00:09:15.313 | 99.99th=[ 824] 00:09:15.313 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:15.313 slat (usec): min=15, max=160, avg=23.36, stdev= 7.49 00:09:15.313 clat (usec): min=80, max=407, avg=106.48, stdev=17.83 00:09:15.313 lat (usec): min=98, max=435, avg=129.83, stdev=20.52 00:09:15.313 clat percentiles (usec): 00:09:15.313 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 93], 00:09:15.313 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 105], 00:09:15.313 | 70.00th=[ 112], 80.00th=[ 119], 90.00th=[ 130], 95.00th=[ 141], 00:09:15.313 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 215], 99.95th=[ 269], 00:09:15.313 | 99.99th=[ 408] 00:09:15.313 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:15.313 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:15.313 lat (usec) : 100=22.21%, 250=77.26%, 500=0.48%, 750=0.02%, 1000=0.03% 00:09:15.313 cpu : usr=2.60%, sys=9.20%, ctx=6064, majf=0, minf=5 00:09:15.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.314 issued rwts: total=2992,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.314 00:09:15.314 Run status group 0 (all jobs): 00:09:15.314 READ: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:09:15.314 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:15.314 00:09:15.314 Disk stats (read/write): 00:09:15.314 nvme0n1: ios=2610/2972, merge=0/0, ticks=518/379, 
in_queue=897, util=91.48% 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.314 10:31:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.314 rmmod nvme_tcp 00:09:15.314 rmmod nvme_fabrics 00:09:15.314 rmmod nvme_keyring 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65468 ']' 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65468 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65468 ']' 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65468 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65468 00:09:15.314 killing process with pid 65468 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65468' 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # 
kill 65468 00:09:15.314 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65468 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:15.573 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:15.832 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:15.833 00:09:15.833 real 0m6.045s 00:09:15.833 user 0m18.489s 00:09:15.833 sys 0m2.295s 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.833 ************************************ 00:09:15.833 END TEST nvmf_nmic 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.833 ************************************ 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.833 ************************************ 00:09:15.833 START TEST nvmf_fio_target 00:09:15.833 ************************************ 00:09:15.833 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:16.094 * Looking for test storage... 00:09:16.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.094 --rc genhtml_branch_coverage=1 00:09:16.094 --rc genhtml_function_coverage=1 00:09:16.094 --rc genhtml_legend=1 00:09:16.094 --rc geninfo_all_blocks=1 00:09:16.094 --rc geninfo_unexecuted_blocks=1 00:09:16.094 00:09:16.094 ' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.094 --rc genhtml_branch_coverage=1 00:09:16.094 --rc genhtml_function_coverage=1 00:09:16.094 --rc genhtml_legend=1 00:09:16.094 --rc geninfo_all_blocks=1 00:09:16.094 --rc geninfo_unexecuted_blocks=1 00:09:16.094 00:09:16.094 ' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.094 --rc genhtml_branch_coverage=1 00:09:16.094 --rc genhtml_function_coverage=1 00:09:16.094 --rc genhtml_legend=1 00:09:16.094 --rc geninfo_all_blocks=1 00:09:16.094 --rc geninfo_unexecuted_blocks=1 00:09:16.094 00:09:16.094 ' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.094 --rc genhtml_branch_coverage=1 00:09:16.094 --rc genhtml_function_coverage=1 00:09:16.094 --rc genhtml_legend=1 00:09:16.094 --rc geninfo_all_blocks=1 00:09:16.094 --rc geninfo_unexecuted_blocks=1 00:09:16.094 00:09:16.094 ' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:16.094 
10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.094 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.095 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.095 10:31:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.095 Cannot find device "nvmf_init_br" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.095 Cannot find device "nvmf_init_br2" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.095 Cannot find device "nvmf_tgt_br" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.095 Cannot find device "nvmf_tgt_br2" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.095 Cannot find device "nvmf_init_br" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.095 Cannot find device "nvmf_init_br2" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.095 Cannot find device "nvmf_tgt_br" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.095 Cannot find device "nvmf_tgt_br2" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.095 Cannot find device "nvmf_br" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.095 Cannot find device "nvmf_init_if" 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:16.095 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.354 Cannot find device "nvmf_init_if2" 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:16.354 
10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.354 10:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.354 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:16.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:16.615 00:09:16.615 --- 10.0.0.3 ping statistics --- 00:09:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.615 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:16.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:16.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:09:16.615 00:09:16.615 --- 10.0.0.4 ping statistics --- 00:09:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.615 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:16.615 00:09:16.615 --- 10.0.0.1 ping statistics --- 00:09:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.615 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:16.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:16.615 00:09:16.615 --- 10.0.0.2 ping statistics --- 00:09:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.615 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65797 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65797 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 65797 ']' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:16.615 10:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.615 [2024-11-12 10:31:05.220618] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
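While the nvmf_tgt application starts up inside that namespace, it is worth spelling out what the nvmf_veth_init sequence traced above actually built. The commands below are a condensed sketch, not an exact replay: interface names, addresses, and the NVMe/TCP port 4420 are taken directly from the trace, the preliminary teardown of any stale interfaces is omitted, and the iptables rules are shown without the "SPDK_NVMF" comment tag that the harness adds so nvmftestfini can later strip them with iptables-save | grep -v SPDK_NVMF | iptables-restore.

  # Target-side network namespace plus four veth pairs (two initiator-facing, two target-facing),
  # as logged by nvmf/common.sh above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator addresses stay on the host; target addresses live inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the peer ends together and open the NVMe/TCP port plus bridge-internal forwarding.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity check in both directions, matching the four pings recorded above.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2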
00:09:16.615 [2024-11-12 10:31:05.220750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.873 [2024-11-12 10:31:05.374873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.873 [2024-11-12 10:31:05.415058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.873 [2024-11-12 10:31:05.415128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.873 [2024-11-12 10:31:05.415142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.873 [2024-11-12 10:31:05.415153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.873 [2024-11-12 10:31:05.415162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.873 [2024-11-12 10:31:05.416119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.873 [2024-11-12 10:31:05.416245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.873 [2024-11-12 10:31:05.416373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.873 [2024-11-12 10:31:05.416380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.873 [2024-11-12 10:31:05.449923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:17.806 [2024-11-12 10:31:06.533064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.806 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.372 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:18.372 10:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.372 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:18.372 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.938 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:18.938 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.938 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:18.938 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:19.196 10:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.761 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:19.761 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.761 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:19.761 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.019 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:20.019 10:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:20.581 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:20.838 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:20.838 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.095 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:21.095 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.353 10:31:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:21.610 [2024-11-12 10:31:10.186678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:21.610 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:21.868 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:22.126 10:31:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:22.126 10:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:24.648 10:31:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:24.648 [global] 00:09:24.648 thread=1 00:09:24.648 invalidate=1 00:09:24.648 rw=write 00:09:24.648 time_based=1 00:09:24.648 runtime=1 00:09:24.648 ioengine=libaio 00:09:24.648 direct=1 00:09:24.648 bs=4096 00:09:24.648 iodepth=1 00:09:24.648 norandommap=0 00:09:24.648 numjobs=1 00:09:24.648 00:09:24.648 verify_dump=1 00:09:24.648 verify_backlog=512 00:09:24.648 verify_state_save=0 00:09:24.648 do_verify=1 00:09:24.648 verify=crc32c-intel 00:09:24.648 [job0] 00:09:24.648 filename=/dev/nvme0n1 00:09:24.648 [job1] 00:09:24.648 filename=/dev/nvme0n2 00:09:24.648 [job2] 00:09:24.648 filename=/dev/nvme0n3 00:09:24.648 [job3] 00:09:24.648 filename=/dev/nvme0n4 00:09:24.648 Could not set queue depth (nvme0n1) 00:09:24.648 Could not set queue depth (nvme0n2) 00:09:24.648 Could not set queue depth (nvme0n3) 00:09:24.648 Could not set queue depth (nvme0n4) 00:09:24.648 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.648 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.648 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.648 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.648 fio-3.35 00:09:24.648 Starting 4 threads 00:09:25.582 00:09:25.582 job0: (groupid=0, jobs=1): err= 0: pid=65981: Tue Nov 12 10:31:14 2024 00:09:25.582 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:25.582 slat (nsec): min=11250, max=31977, avg=13336.60, stdev=1636.48 00:09:25.582 clat (usec): min=128, max=227, avg=161.44, stdev=12.73 00:09:25.582 lat (usec): min=140, max=239, avg=174.78, stdev=12.90 00:09:25.582 clat percentiles (usec): 00:09:25.582 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:25.582 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:25.582 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:09:25.583 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 215], 99.95th=[ 219], 00:09:25.583 | 99.99th=[ 227] 
00:09:25.583 write: IOPS=3227, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:09:25.583 slat (nsec): min=14758, max=78892, avg=20961.98, stdev=3003.56 00:09:25.583 clat (usec): min=91, max=583, avg=118.74, stdev=13.75 00:09:25.583 lat (usec): min=110, max=609, avg=139.70, stdev=14.14 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 111], 00:09:25.583 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:09:25.583 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:09:25.583 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 285], 00:09:25.583 | 99.99th=[ 586] 00:09:25.583 bw ( KiB/s): min=12664, max=12664, per=40.43%, avg=12664.00, stdev= 0.00, samples=1 00:09:25.583 iops : min= 3166, max= 3166, avg=3166.00, stdev= 0.00, samples=1 00:09:25.583 lat (usec) : 100=1.55%, 250=98.41%, 500=0.02%, 750=0.02% 00:09:25.583 cpu : usr=3.00%, sys=8.20%, ctx=6303, majf=0, minf=11 00:09:25.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 issued rwts: total=3072,3231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.583 job1: (groupid=0, jobs=1): err= 0: pid=65982: Tue Nov 12 10:31:14 2024 00:09:25.583 read: IOPS=1473, BW=5894KiB/s (6036kB/s)(5900KiB/1001msec) 00:09:25.583 slat (nsec): min=10086, max=34304, avg=12649.56, stdev=2632.00 00:09:25.583 clat (usec): min=279, max=516, avg=354.16, stdev=19.98 00:09:25.583 lat (usec): min=292, max=534, avg=366.81, stdev=20.42 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:09:25.583 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 355], 00:09:25.583 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 371], 95.00th=[ 379], 00:09:25.583 | 99.00th=[ 420], 99.50th=[ 490], 99.90th=[ 515], 99.95th=[ 519], 00:09:25.583 | 99.99th=[ 519] 00:09:25.583 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:25.583 slat (nsec): min=14068, max=92345, avg=26091.96, stdev=6649.28 00:09:25.583 clat (usec): min=105, max=1873, avg=268.66, stdev=63.84 00:09:25.583 lat (usec): min=130, max=1896, avg=294.75, stdev=64.82 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 149], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 249], 00:09:25.583 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:25.583 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 326], 00:09:25.583 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 562], 99.95th=[ 1876], 00:09:25.583 | 99.99th=[ 1876] 00:09:25.583 bw ( KiB/s): min= 8192, max= 8192, per=26.15%, avg=8192.00, stdev= 0.00, samples=1 00:09:25.583 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:25.583 lat (usec) : 250=10.30%, 500=89.41%, 750=0.27% 00:09:25.583 lat (msec) : 2=0.03% 00:09:25.583 cpu : usr=1.30%, sys=5.00%, ctx=3027, majf=0, minf=15 00:09:25.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 issued rwts: total=1475,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.583 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:25.583 job2: (groupid=0, jobs=1): err= 0: pid=65983: Tue Nov 12 10:31:14 2024 00:09:25.583 read: IOPS=1484, BW=5938KiB/s (6081kB/s)(5944KiB/1001msec) 00:09:25.583 slat (nsec): min=21189, max=52508, avg=25429.02, stdev=4552.36 00:09:25.583 clat (usec): min=251, max=949, avg=347.27, stdev=47.68 00:09:25.583 lat (usec): min=279, max=972, avg=372.70, stdev=50.14 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:09:25.583 | 30.00th=[ 334], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:09:25.583 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 371], 00:09:25.583 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 947], 00:09:25.583 | 99.99th=[ 947] 00:09:25.583 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:25.583 slat (nsec): min=21519, max=87284, avg=34689.89, stdev=6449.66 00:09:25.583 clat (usec): min=108, max=687, avg=250.17, stdev=44.68 00:09:25.583 lat (usec): min=137, max=712, avg=284.86, stdev=47.19 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 123], 5.00th=[ 143], 10.00th=[ 188], 20.00th=[ 231], 00:09:25.583 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 265], 00:09:25.583 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:09:25.583 | 99.00th=[ 343], 99.50th=[ 371], 99.90th=[ 652], 99.95th=[ 685], 00:09:25.583 | 99.99th=[ 685] 00:09:25.583 bw ( KiB/s): min= 8192, max= 8192, per=26.15%, avg=8192.00, stdev= 0.00, samples=1 00:09:25.583 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:25.583 lat (usec) : 250=16.48%, 500=81.93%, 750=1.56%, 1000=0.03% 00:09:25.583 cpu : usr=2.10%, sys=7.10%, ctx=3022, majf=0, minf=7 00:09:25.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 issued rwts: total=1486,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.583 job3: (groupid=0, jobs=1): err= 0: pid=65984: Tue Nov 12 10:31:14 2024 00:09:25.583 read: IOPS=1495, BW=5982KiB/s (6126kB/s)(5988KiB/1001msec) 00:09:25.583 slat (nsec): min=11236, max=67090, avg=19090.46, stdev=3112.03 00:09:25.583 clat (usec): min=146, max=450, avg=345.94, stdev=19.19 00:09:25.583 lat (usec): min=173, max=470, avg=365.03, stdev=19.26 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:09:25.583 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:09:25.583 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 371], 00:09:25.583 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 449], 99.95th=[ 449], 00:09:25.583 | 99.99th=[ 449] 00:09:25.583 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:25.583 slat (nsec): min=13727, max=87019, avg=26046.58, stdev=7805.70 00:09:25.583 clat (usec): min=113, max=3794, avg=264.55, stdev=127.61 00:09:25.583 lat (usec): min=138, max=3857, avg=290.60, stdev=128.77 00:09:25.583 clat percentiles (usec): 00:09:25.583 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 143], 20.00th=[ 245], 00:09:25.583 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:25.583 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 326], 00:09:25.583 | 99.00th=[ 416], 99.50th=[ 449], 
99.90th=[ 1844], 99.95th=[ 3785], 00:09:25.583 | 99.99th=[ 3785] 00:09:25.583 bw ( KiB/s): min= 4096, max= 8208, per=19.64%, avg=6152.00, stdev=2907.62, samples=2 00:09:25.583 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:09:25.583 lat (usec) : 250=11.24%, 500=88.53%, 750=0.07% 00:09:25.583 lat (msec) : 2=0.13%, 4=0.03% 00:09:25.583 cpu : usr=1.90%, sys=5.70%, ctx=3039, majf=0, minf=5 00:09:25.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.583 issued rwts: total=1497,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.583 00:09:25.583 Run status group 0 (all jobs): 00:09:25.583 READ: bw=29.4MiB/s (30.8MB/s), 5894KiB/s-12.0MiB/s (6036kB/s-12.6MB/s), io=29.4MiB (30.8MB), run=1001-1001msec 00:09:25.583 WRITE: bw=30.6MiB/s (32.1MB/s), 6138KiB/s-12.6MiB/s (6285kB/s-13.2MB/s), io=30.6MiB (32.1MB), run=1001-1001msec 00:09:25.583 00:09:25.583 Disk stats (read/write): 00:09:25.583 nvme0n1: ios=2610/2878, merge=0/0, ticks=444/372, in_queue=816, util=87.68% 00:09:25.583 nvme0n2: ios=1129/1536, merge=0/0, ticks=379/411, in_queue=790, util=88.15% 00:09:25.583 nvme0n3: ios=1107/1536, merge=0/0, ticks=390/405, in_queue=795, util=89.25% 00:09:25.583 nvme0n4: ios=1113/1536, merge=0/0, ticks=394/390, in_queue=784, util=89.39% 00:09:25.583 10:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:25.583 [global] 00:09:25.583 thread=1 00:09:25.583 invalidate=1 00:09:25.583 rw=randwrite 00:09:25.583 time_based=1 00:09:25.583 runtime=1 00:09:25.583 ioengine=libaio 00:09:25.583 direct=1 00:09:25.583 bs=4096 00:09:25.583 iodepth=1 00:09:25.583 norandommap=0 00:09:25.583 numjobs=1 00:09:25.583 00:09:25.583 verify_dump=1 00:09:25.583 verify_backlog=512 00:09:25.583 verify_state_save=0 00:09:25.583 do_verify=1 00:09:25.583 verify=crc32c-intel 00:09:25.583 [job0] 00:09:25.583 filename=/dev/nvme0n1 00:09:25.583 [job1] 00:09:25.583 filename=/dev/nvme0n2 00:09:25.583 [job2] 00:09:25.583 filename=/dev/nvme0n3 00:09:25.583 [job3] 00:09:25.583 filename=/dev/nvme0n4 00:09:25.583 Could not set queue depth (nvme0n1) 00:09:25.583 Could not set queue depth (nvme0n2) 00:09:25.583 Could not set queue depth (nvme0n3) 00:09:25.583 Could not set queue depth (nvme0n4) 00:09:25.842 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.842 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.842 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.842 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.842 fio-3.35 00:09:25.842 Starting 4 threads 00:09:27.215 00:09:27.215 job0: (groupid=0, jobs=1): err= 0: pid=66048: Tue Nov 12 10:31:15 2024 00:09:27.215 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:27.215 slat (nsec): min=11671, max=35711, avg=14369.75, stdev=2476.05 00:09:27.215 clat (usec): min=143, max=685, avg=256.06, stdev=29.91 00:09:27.215 lat (usec): min=158, max=700, avg=270.43, stdev=30.20 00:09:27.215 clat percentiles (usec): 00:09:27.215 | 
1.00th=[ 180], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:09:27.215 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:09:27.215 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:09:27.215 | 99.00th=[ 367], 99.50th=[ 441], 99.90th=[ 570], 99.95th=[ 644], 00:09:27.215 | 99.99th=[ 685] 00:09:27.215 write: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec); 0 zone resets 00:09:27.215 slat (usec): min=17, max=118, avg=22.42, stdev= 4.88 00:09:27.215 clat (usec): min=100, max=1792, avg=189.84, stdev=46.35 00:09:27.215 lat (usec): min=121, max=1816, avg=212.25, stdev=47.07 00:09:27.215 clat percentiles (usec): 00:09:27.215 | 1.00th=[ 125], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:27.215 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:27.215 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 219], 00:09:27.215 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 437], 99.95th=[ 486], 00:09:27.215 | 99.99th=[ 1795] 00:09:27.215 bw ( KiB/s): min= 8232, max= 8232, per=20.06%, avg=8232.00, stdev= 0.00, samples=1 00:09:27.215 iops : min= 2058, max= 2058, avg=2058.00, stdev= 0.00, samples=1 00:09:27.215 lat (usec) : 250=68.29%, 500=31.59%, 750=0.10% 00:09:27.215 lat (msec) : 2=0.02% 00:09:27.215 cpu : usr=0.90%, sys=6.90%, ctx=4112, majf=0, minf=15 00:09:27.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.215 issued rwts: total=2048,2064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.215 job1: (groupid=0, jobs=1): err= 0: pid=66049: Tue Nov 12 10:31:15 2024 00:09:27.215 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:27.216 slat (nsec): min=12290, max=50479, avg=14997.14, stdev=3480.28 00:09:27.216 clat (usec): min=162, max=2180, avg=258.37, stdev=56.87 00:09:27.216 lat (usec): min=180, max=2207, avg=273.37, stdev=57.12 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:09:27.216 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:09:27.216 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:09:27.216 | 99.00th=[ 416], 99.50th=[ 478], 99.90th=[ 848], 99.95th=[ 1188], 00:09:27.216 | 99.99th=[ 2180] 00:09:27.216 write: IOPS=2087, BW=8352KiB/s (8552kB/s)(8360KiB/1001msec); 0 zone resets 00:09:27.216 slat (usec): min=16, max=108, avg=25.33, stdev= 6.74 00:09:27.216 clat (usec): min=94, max=450, avg=181.26, stdev=25.02 00:09:27.216 lat (usec): min=115, max=469, avg=206.59, stdev=25.14 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 108], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 169], 00:09:27.216 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:27.216 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 208], 00:09:27.216 | 99.00th=[ 231], 99.50th=[ 289], 99.90th=[ 408], 99.95th=[ 420], 00:09:27.216 | 99.99th=[ 449] 00:09:27.216 bw ( KiB/s): min= 8376, max= 8376, per=20.42%, avg=8376.00, stdev= 0.00, samples=1 00:09:27.216 iops : min= 2094, max= 2094, avg=2094.00, stdev= 0.00, samples=1 00:09:27.216 lat (usec) : 100=0.12%, 250=69.99%, 500=29.72%, 750=0.10%, 1000=0.02% 00:09:27.216 lat (msec) : 2=0.02%, 4=0.02% 00:09:27.216 cpu : usr=1.50%, sys=6.90%, ctx=4138, majf=0, minf=12 00:09:27.216 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 issued rwts: total=2048,2090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.216 job2: (groupid=0, jobs=1): err= 0: pid=66050: Tue Nov 12 10:31:15 2024 00:09:27.216 read: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:09:27.216 slat (nsec): min=10775, max=31479, avg=12785.69, stdev=1641.71 00:09:27.216 clat (usec): min=142, max=3609, avg=177.13, stdev=90.84 00:09:27.216 lat (usec): min=154, max=3632, avg=189.92, stdev=91.19 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:09:27.216 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:27.216 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:09:27.216 | 99.00th=[ 221], 99.50th=[ 363], 99.90th=[ 1631], 99.95th=[ 2737], 00:09:27.216 | 99.99th=[ 3621] 00:09:27.216 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:27.216 slat (usec): min=15, max=110, avg=19.95, stdev= 3.48 00:09:27.216 clat (usec): min=103, max=1780, avg=134.88, stdev=31.86 00:09:27.216 lat (usec): min=122, max=1800, avg=154.83, stdev=32.12 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:09:27.216 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:09:27.216 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:09:27.216 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 188], 99.95th=[ 408], 00:09:27.216 | 99.99th=[ 1778] 00:09:27.216 bw ( KiB/s): min=12288, max=12288, per=29.95%, avg=12288.00, stdev= 0.00, samples=1 00:09:27.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:27.216 lat (usec) : 250=99.62%, 500=0.21%, 750=0.10% 00:09:27.216 lat (msec) : 2=0.03%, 4=0.03% 00:09:27.216 cpu : usr=1.80%, sys=8.00%, ctx=5782, majf=0, minf=14 00:09:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 issued rwts: total=2706,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.216 job3: (groupid=0, jobs=1): err= 0: pid=66051: Tue Nov 12 10:31:15 2024 00:09:27.216 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:27.216 slat (nsec): min=11750, max=34425, avg=14683.47, stdev=2131.90 00:09:27.216 clat (usec): min=147, max=3945, avg=180.19, stdev=101.34 00:09:27.216 lat (usec): min=161, max=3964, avg=194.88, stdev=101.56 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:09:27.216 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:09:27.216 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:09:27.216 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 1188], 99.95th=[ 3458], 00:09:27.216 | 99.99th=[ 3949] 00:09:27.216 write: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec); 0 zone resets 00:09:27.216 slat (usec): min=15, max=392, avg=22.03, stdev= 7.66 00:09:27.216 clat (usec): min=15, max=5803, avg=139.09, stdev=118.57 
00:09:27.216 lat (usec): min=123, max=5824, avg=161.12, stdev=118.82 00:09:27.216 clat percentiles (usec): 00:09:27.216 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 127], 00:09:27.216 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:27.216 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:09:27.216 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 701], 99.95th=[ 3064], 00:09:27.216 | 99.99th=[ 5800] 00:09:27.216 bw ( KiB/s): min=12288, max=12288, per=29.95%, avg=12288.00, stdev= 0.00, samples=1 00:09:27.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:27.216 lat (usec) : 20=0.02%, 250=99.80%, 500=0.05%, 750=0.02% 00:09:27.216 lat (msec) : 2=0.04%, 4=0.05%, 10=0.02% 00:09:27.216 cpu : usr=2.60%, sys=7.90%, ctx=5603, majf=0, minf=8 00:09:27.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.216 issued rwts: total=2560,3041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.216 00:09:27.216 Run status group 0 (all jobs): 00:09:27.216 READ: bw=36.5MiB/s (38.3MB/s), 8184KiB/s-10.6MiB/s (8380kB/s-11.1MB/s), io=36.6MiB (38.3MB), run=1001-1001msec 00:09:27.216 WRITE: bw=40.1MiB/s (42.0MB/s), 8248KiB/s-12.0MiB/s (8446kB/s-12.6MB/s), io=40.1MiB (42.1MB), run=1001-1001msec 00:09:27.216 00:09:27.216 Disk stats (read/write): 00:09:27.216 nvme0n1: ios=1615/2048, merge=0/0, ticks=420/408, in_queue=828, util=87.68% 00:09:27.216 nvme0n2: ios=1621/2048, merge=0/0, ticks=437/396, in_queue=833, util=87.98% 00:09:27.216 nvme0n3: ios=2396/2560, merge=0/0, ticks=421/362, in_queue=783, util=88.95% 00:09:27.216 nvme0n4: ios=2241/2560, merge=0/0, ticks=408/370, in_queue=778, util=89.09% 00:09:27.216 10:31:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:27.216 [global] 00:09:27.216 thread=1 00:09:27.216 invalidate=1 00:09:27.216 rw=write 00:09:27.216 time_based=1 00:09:27.216 runtime=1 00:09:27.216 ioengine=libaio 00:09:27.216 direct=1 00:09:27.216 bs=4096 00:09:27.216 iodepth=128 00:09:27.216 norandommap=0 00:09:27.216 numjobs=1 00:09:27.216 00:09:27.216 verify_dump=1 00:09:27.216 verify_backlog=512 00:09:27.216 verify_state_save=0 00:09:27.216 do_verify=1 00:09:27.216 verify=crc32c-intel 00:09:27.216 [job0] 00:09:27.216 filename=/dev/nvme0n1 00:09:27.216 [job1] 00:09:27.216 filename=/dev/nvme0n2 00:09:27.216 [job2] 00:09:27.216 filename=/dev/nvme0n3 00:09:27.216 [job3] 00:09:27.216 filename=/dev/nvme0n4 00:09:27.216 Could not set queue depth (nvme0n1) 00:09:27.216 Could not set queue depth (nvme0n2) 00:09:27.216 Could not set queue depth (nvme0n3) 00:09:27.216 Could not set queue depth (nvme0n4) 00:09:27.216 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.216 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.216 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.216 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.216 fio-3.35 00:09:27.216 Starting 4 threads 00:09:28.590 00:09:28.590 job0: (groupid=0, 
jobs=1): err= 0: pid=66106: Tue Nov 12 10:31:16 2024 00:09:28.590 read: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec) 00:09:28.590 slat (usec): min=5, max=8095, avg=181.44, stdev=933.37 00:09:28.590 clat (usec): min=1166, max=28698, avg=22053.14, stdev=3041.13 00:09:28.590 lat (usec): min=6308, max=28712, avg=22234.58, stdev=2929.12 00:09:28.590 clat percentiles (usec): 00:09:28.590 | 1.00th=[ 6718], 5.00th=[16909], 10.00th=[18220], 20.00th=[21103], 00:09:28.590 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[22676], 00:09:28.590 | 70.00th=[22938], 80.00th=[23200], 90.00th=[23725], 95.00th=[25822], 00:09:28.590 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:09:28.590 | 99.99th=[28705] 00:09:28.590 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:28.590 slat (usec): min=11, max=5702, avg=158.49, stdev=751.83 00:09:28.590 clat (usec): min=11901, max=27949, avg=21748.81, stdev=2207.29 00:09:28.590 lat (usec): min=14870, max=27974, avg=21907.30, stdev=2057.49 00:09:28.590 clat percentiles (usec): 00:09:28.590 | 1.00th=[16450], 5.00th=[17171], 10.00th=[19268], 20.00th=[21103], 00:09:28.590 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[21627], 00:09:28.590 | 70.00th=[22152], 80.00th=[22676], 90.00th=[24249], 95.00th=[27132], 00:09:28.590 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:09:28.590 | 99.99th=[27919] 00:09:28.590 bw ( KiB/s): min=12288, max=12312, per=18.81%, avg=12300.00, stdev=16.97, samples=2 00:09:28.590 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:28.590 lat (msec) : 2=0.02%, 10=0.56%, 20=13.19%, 50=86.24% 00:09:28.590 cpu : usr=3.00%, sys=8.59%, ctx=181, majf=0, minf=7 00:09:28.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:28.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.591 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.591 job1: (groupid=0, jobs=1): err= 0: pid=66107: Tue Nov 12 10:31:16 2024 00:09:28.591 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1002msec) 00:09:28.591 slat (usec): min=5, max=7317, avg=174.78, stdev=881.00 00:09:28.591 clat (usec): min=357, max=32477, avg=23050.15, stdev=3598.73 00:09:28.591 lat (usec): min=3228, max=32499, avg=23224.93, stdev=3489.21 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[ 3752], 5.00th=[17695], 10.00th=[22152], 20.00th=[22414], 00:09:28.591 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:09:28.591 | 70.00th=[23462], 80.00th=[24773], 90.00th=[26346], 95.00th=[27657], 00:09:28.591 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:09:28.591 | 99.99th=[32375] 00:09:28.591 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:28.591 slat (usec): min=12, max=10355, avg=166.01, stdev=792.44 00:09:28.591 clat (usec): min=13532, max=23599, avg=21117.08, stdev=1354.76 00:09:28.591 lat (usec): min=16959, max=27702, avg=21283.09, stdev=1148.88 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[16909], 5.00th=[18482], 10.00th=[18744], 20.00th=[20579], 00:09:28.591 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:09:28.591 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:09:28.591 | 
99.00th=[23200], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:09:28.591 | 99.99th=[23725] 00:09:28.591 bw ( KiB/s): min=12040, max=12312, per=18.62%, avg=12176.00, stdev=192.33, samples=2 00:09:28.591 iops : min= 3010, max= 3078, avg=3044.00, stdev=48.08, samples=2 00:09:28.591 lat (usec) : 500=0.02% 00:09:28.591 lat (msec) : 4=0.56%, 10=0.56%, 20=12.13%, 50=86.73% 00:09:28.591 cpu : usr=2.60%, sys=9.89%, ctx=180, majf=0, minf=10 00:09:28.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:28.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.591 issued rwts: total=2657,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.591 job2: (groupid=0, jobs=1): err= 0: pid=66108: Tue Nov 12 10:31:16 2024 00:09:28.591 read: IOPS=4719, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1002msec) 00:09:28.591 slat (usec): min=4, max=3923, avg=99.09, stdev=391.62 00:09:28.591 clat (usec): min=1513, max=17423, avg=13110.44, stdev=1409.47 00:09:28.591 lat (usec): min=1524, max=17527, avg=13209.53, stdev=1444.18 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[ 5407], 5.00th=[11469], 10.00th=[12256], 20.00th=[12649], 00:09:28.591 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:28.591 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14484], 95.00th=[15008], 00:09:28.591 | 99.00th=[15795], 99.50th=[16319], 99.90th=[16712], 99.95th=[17171], 00:09:28.591 | 99.99th=[17433] 00:09:28.591 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:28.591 slat (usec): min=12, max=3508, avg=95.35, stdev=429.49 00:09:28.591 clat (usec): min=9612, max=17407, avg=12615.54, stdev=976.94 00:09:28.591 lat (usec): min=9644, max=17425, avg=12710.89, stdev=1056.86 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[10552], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:09:28.591 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:28.591 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13960], 95.00th=[14746], 00:09:28.591 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:09:28.591 | 99.99th=[17433] 00:09:28.591 bw ( KiB/s): min=20432, max=20480, per=31.28%, avg=20456.00, stdev=33.94, samples=2 00:09:28.591 iops : min= 5108, max= 5120, avg=5114.00, stdev= 8.49, samples=2 00:09:28.591 lat (msec) : 2=0.12%, 4=0.25%, 10=0.55%, 20=99.08% 00:09:28.591 cpu : usr=4.50%, sys=15.08%, ctx=399, majf=0, minf=5 00:09:28.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:28.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.591 issued rwts: total=4729,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.591 job3: (groupid=0, jobs=1): err= 0: pid=66109: Tue Nov 12 10:31:16 2024 00:09:28.591 read: IOPS=4695, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:09:28.591 slat (usec): min=4, max=5897, avg=99.89, stdev=475.93 00:09:28.591 clat (usec): min=264, max=16746, avg=13193.81, stdev=1290.10 00:09:28.591 lat (usec): min=3099, max=16758, avg=13293.70, stdev=1201.97 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[ 6718], 5.00th=[11338], 10.00th=[12780], 20.00th=[13042], 00:09:28.591 | 
30.00th=[13173], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:09:28.591 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091], 00:09:28.591 | 99.00th=[16712], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712], 00:09:28.591 | 99.99th=[16712] 00:09:28.591 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:28.591 slat (usec): min=12, max=2974, avg=95.76, stdev=407.54 00:09:28.591 clat (usec): min=9285, max=13522, avg=12589.03, stdev=544.28 00:09:28.591 lat (usec): min=10754, max=13549, avg=12684.79, stdev=361.28 00:09:28.591 clat percentiles (usec): 00:09:28.591 | 1.00th=[10159], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:09:28.591 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:09:28.591 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:09:28.591 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:09:28.591 | 99.99th=[13566] 00:09:28.591 bw ( KiB/s): min=20232, max=20521, per=31.15%, avg=20376.50, stdev=204.35, samples=2 00:09:28.591 iops : min= 5058, max= 5130, avg=5094.00, stdev=50.91, samples=2 00:09:28.591 lat (usec) : 500=0.01% 00:09:28.591 lat (msec) : 4=0.33%, 10=0.68%, 20=98.98% 00:09:28.591 cpu : usr=3.90%, sys=14.69%, ctx=321, majf=0, minf=3 00:09:28.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:28.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.591 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.591 00:09:28.591 Run status group 0 (all jobs): 00:09:28.591 READ: bw=57.6MiB/s (60.4MB/s), 10.4MiB/s-18.4MiB/s (10.9MB/s-19.3MB/s), io=57.7MiB (60.5MB), run=1002-1002msec 00:09:28.591 WRITE: bw=63.9MiB/s (67.0MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1002-1002msec 00:09:28.591 00:09:28.591 Disk stats (read/write): 00:09:28.591 nvme0n1: ios=2450/2560, merge=0/0, ticks=13180/12091, in_queue=25271, util=88.08% 00:09:28.591 nvme0n2: ios=2385/2560, merge=0/0, ticks=12776/12126, in_queue=24902, util=88.29% 00:09:28.591 nvme0n3: ios=4096/4396, merge=0/0, ticks=17167/15514, in_queue=32681, util=89.28% 00:09:28.591 nvme0n4: ios=4102/4352, merge=0/0, ticks=12267/11723, in_queue=23990, util=89.84% 00:09:28.591 10:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:28.591 [global] 00:09:28.591 thread=1 00:09:28.591 invalidate=1 00:09:28.591 rw=randwrite 00:09:28.591 time_based=1 00:09:28.591 runtime=1 00:09:28.591 ioengine=libaio 00:09:28.591 direct=1 00:09:28.591 bs=4096 00:09:28.591 iodepth=128 00:09:28.591 norandommap=0 00:09:28.591 numjobs=1 00:09:28.591 00:09:28.591 verify_dump=1 00:09:28.591 verify_backlog=512 00:09:28.591 verify_state_save=0 00:09:28.591 do_verify=1 00:09:28.591 verify=crc32c-intel 00:09:28.591 [job0] 00:09:28.591 filename=/dev/nvme0n1 00:09:28.591 [job1] 00:09:28.591 filename=/dev/nvme0n2 00:09:28.591 [job2] 00:09:28.591 filename=/dev/nvme0n3 00:09:28.591 [job3] 00:09:28.591 filename=/dev/nvme0n4 00:09:28.591 Could not set queue depth (nvme0n1) 00:09:28.591 Could not set queue depth (nvme0n2) 00:09:28.591 Could not set queue depth (nvme0n3) 00:09:28.591 Could not set queue depth (nvme0n4) 00:09:28.591 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.591 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.591 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.591 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.591 fio-3.35 00:09:28.591 Starting 4 threads 00:09:29.965 00:09:29.965 job0: (groupid=0, jobs=1): err= 0: pid=66162: Tue Nov 12 10:31:18 2024 00:09:29.965 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:29.965 slat (usec): min=5, max=5211, avg=81.38, stdev=478.96 00:09:29.965 clat (usec): min=6975, max=18088, avg=11535.55, stdev=1260.48 00:09:29.965 lat (usec): min=6990, max=21417, avg=11616.93, stdev=1280.67 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[ 7635], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:09:29.965 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:29.965 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:09:29.965 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:09:29.965 | 99.99th=[18220] 00:09:29.965 write: IOPS=5989, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1004msec); 0 zone resets 00:09:29.965 slat (usec): min=6, max=8082, avg=83.02, stdev=469.58 00:09:29.965 clat (usec): min=504, max=15789, avg=10359.88, stdev=1281.65 00:09:29.965 lat (usec): min=4480, max=15840, avg=10442.89, stdev=1215.91 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9503], 00:09:29.965 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[10683], 00:09:29.965 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:09:29.965 | 99.00th=[15401], 99.50th=[15533], 99.90th=[15664], 99.95th=[15664], 00:09:29.965 | 99.99th=[15795] 00:09:29.965 bw ( KiB/s): min=22504, max=24625, per=34.56%, avg=23564.50, stdev=1499.77, samples=2 00:09:29.965 iops : min= 5626, max= 6156, avg=5891.00, stdev=374.77, samples=2 00:09:29.965 lat (usec) : 750=0.01% 00:09:29.965 lat (msec) : 10=19.85%, 20=80.14% 00:09:29.965 cpu : usr=4.89%, sys=16.05%, ctx=257, majf=0, minf=15 00:09:29.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:29.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.965 issued rwts: total=5632,6013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.965 job1: (groupid=0, jobs=1): err= 0: pid=66163: Tue Nov 12 10:31:18 2024 00:09:29.965 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:29.965 slat (usec): min=12, max=11711, avg=174.37, stdev=1167.70 00:09:29.965 clat (usec): min=13745, max=37424, avg=23999.73, stdev=2659.45 00:09:29.965 lat (usec): min=13760, max=45446, avg=24174.10, stdev=2713.52 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[14353], 5.00th=[21890], 10.00th=[22676], 20.00th=[23200], 00:09:29.965 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:09:29.965 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:09:29.965 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:09:29.965 | 99.99th=[37487] 00:09:29.965 write: IOPS=2989, BW=11.7MiB/s 
(12.2MB/s)(11.7MiB/1006msec); 0 zone resets 00:09:29.965 slat (usec): min=16, max=18233, avg=177.03, stdev=1160.17 00:09:29.965 clat (usec): min=729, max=32063, avg=21972.12, stdev=3000.05 00:09:29.965 lat (usec): min=8159, max=32095, avg=22149.14, stdev=2820.35 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[ 9765], 5.00th=[19268], 10.00th=[20317], 20.00th=[20841], 00:09:29.965 | 30.00th=[21365], 40.00th=[21627], 50.00th=[22152], 60.00th=[22414], 00:09:29.965 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23987], 95.00th=[24511], 00:09:29.965 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:09:29.965 | 99.99th=[32113] 00:09:29.965 bw ( KiB/s): min=10744, max=12288, per=16.89%, avg=11516.00, stdev=1091.77, samples=2 00:09:29.965 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:09:29.965 lat (usec) : 750=0.02% 00:09:29.965 lat (msec) : 10=0.57%, 20=4.72%, 50=94.68% 00:09:29.965 cpu : usr=2.89%, sys=9.15%, ctx=117, majf=0, minf=13 00:09:29.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:29.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.965 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.965 job2: (groupid=0, jobs=1): err= 0: pid=66164: Tue Nov 12 10:31:18 2024 00:09:29.965 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:29.965 slat (usec): min=7, max=11903, avg=174.66, stdev=1176.99 00:09:29.965 clat (usec): min=13827, max=37561, avg=23992.62, stdev=2675.86 00:09:29.965 lat (usec): min=13853, max=45424, avg=24167.28, stdev=2727.53 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[14222], 5.00th=[21890], 10.00th=[22676], 20.00th=[23200], 00:09:29.965 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:09:29.965 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25822], 00:09:29.965 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:09:29.965 | 99.99th=[37487] 00:09:29.965 write: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1006msec); 0 zone resets 00:09:29.965 slat (usec): min=7, max=18909, avg=177.38, stdev=1178.98 00:09:29.965 clat (usec): min=743, max=32812, avg=21990.65, stdev=2946.75 00:09:29.965 lat (usec): min=9795, max=32837, avg=22168.03, stdev=2757.17 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[10814], 5.00th=[19530], 10.00th=[20317], 20.00th=[20841], 00:09:29.965 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[22414], 00:09:29.965 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23987], 95.00th=[24249], 00:09:29.965 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:09:29.965 | 99.99th=[32900] 00:09:29.965 bw ( KiB/s): min=10744, max=12288, per=16.89%, avg=11516.00, stdev=1091.77, samples=2 00:09:29.965 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:09:29.965 lat (usec) : 750=0.02% 00:09:29.965 lat (msec) : 10=0.11%, 20=5.19%, 50=94.68% 00:09:29.965 cpu : usr=2.59%, sys=8.66%, ctx=156, majf=0, minf=10 00:09:29.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:29.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.965 issued rwts: total=2560,3007,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:29.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.965 job3: (groupid=0, jobs=1): err= 0: pid=66165: Tue Nov 12 10:31:18 2024 00:09:29.965 read: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec) 00:09:29.965 slat (usec): min=8, max=5907, avg=98.04, stdev=535.74 00:09:29.965 clat (usec): min=1387, max=23756, avg=13132.71, stdev=1547.26 00:09:29.965 lat (usec): min=3786, max=29643, avg=13230.75, stdev=1555.52 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[ 6849], 5.00th=[10945], 10.00th=[12125], 20.00th=[12649], 00:09:29.965 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:29.965 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[15008], 00:09:29.965 | 99.00th=[17433], 99.50th=[18220], 99.90th=[23725], 99.95th=[23725], 00:09:29.965 | 99.99th=[23725] 00:09:29.965 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:29.965 slat (usec): min=6, max=9555, avg=91.13, stdev=545.66 00:09:29.965 clat (usec): min=6277, max=19756, avg=11917.24, stdev=1245.42 00:09:29.965 lat (usec): min=8155, max=19970, avg=12008.36, stdev=1144.07 00:09:29.965 clat percentiles (usec): 00:09:29.965 | 1.00th=[ 7898], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11338], 00:09:29.965 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:09:29.965 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12649], 95.00th=[13435], 00:09:29.965 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:09:29.965 | 99.99th=[19792] 00:09:29.966 bw ( KiB/s): min=20480, max=20480, per=30.04%, avg=20480.00, stdev= 0.00, samples=2 00:09:29.966 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:29.966 lat (msec) : 2=0.01%, 4=0.13%, 10=4.35%, 20=95.36%, 50=0.16% 00:09:29.966 cpu : usr=4.79%, sys=13.56%, ctx=259, majf=0, minf=11 00:09:29.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:29.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.966 issued rwts: total=5043,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.966 00:09:29.966 Run status group 0 (all jobs): 00:09:29.966 READ: bw=61.3MiB/s (64.3MB/s), 9.94MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.7MiB (64.7MB), run=1004-1006msec 00:09:29.966 WRITE: bw=66.6MiB/s (69.8MB/s), 11.7MiB/s-23.4MiB/s (12.2MB/s-24.5MB/s), io=67.0MiB (70.2MB), run=1004-1006msec 00:09:29.966 00:09:29.966 Disk stats (read/write): 00:09:29.966 nvme0n1: ios=4860/5120, merge=0/0, ticks=51719/48508, in_queue=100227, util=87.98% 00:09:29.966 nvme0n2: ios=2152/2560, merge=0/0, ticks=48854/53409, in_queue=102263, util=88.46% 00:09:29.966 nvme0n3: ios=2116/2560, merge=0/0, ticks=48958/53730, in_queue=102688, util=89.14% 00:09:29.966 nvme0n4: ios=4113/4584, merge=0/0, ticks=51398/50557, in_queue=101955, util=89.92% 00:09:29.966 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:29.966 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66184 00:09:29.966 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:29.966 10:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:29.966 [global] 00:09:29.966 thread=1 00:09:29.966 
invalidate=1 00:09:29.966 rw=read 00:09:29.966 time_based=1 00:09:29.966 runtime=10 00:09:29.966 ioengine=libaio 00:09:29.966 direct=1 00:09:29.966 bs=4096 00:09:29.966 iodepth=1 00:09:29.966 norandommap=1 00:09:29.966 numjobs=1 00:09:29.966 00:09:29.966 [job0] 00:09:29.966 filename=/dev/nvme0n1 00:09:29.966 [job1] 00:09:29.966 filename=/dev/nvme0n2 00:09:29.966 [job2] 00:09:29.966 filename=/dev/nvme0n3 00:09:29.966 [job3] 00:09:29.966 filename=/dev/nvme0n4 00:09:29.966 Could not set queue depth (nvme0n1) 00:09:29.966 Could not set queue depth (nvme0n2) 00:09:29.966 Could not set queue depth (nvme0n3) 00:09:29.966 Could not set queue depth (nvme0n4) 00:09:29.966 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.966 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.966 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.966 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.966 fio-3.35 00:09:29.966 Starting 4 threads 00:09:33.246 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:33.246 fio: pid=66231, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.246 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45731840, buflen=4096 00:09:33.246 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:33.246 fio: pid=66230, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.246 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47427584, buflen=4096 00:09:33.246 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.246 10:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:33.504 fio: pid=66227, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.504 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58003456, buflen=4096 00:09:33.761 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.761 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:33.761 fio: pid=66229, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.761 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58953728, buflen=4096 00:09:34.020 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.020 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:34.020 00:09:34.020 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66227: Tue Nov 12 10:31:22 2024 00:09:34.020 read: IOPS=4050, BW=15.8MiB/s (16.6MB/s)(55.3MiB/3496msec) 00:09:34.020 slat 
(usec): min=8, max=18382, avg=18.58, stdev=242.23 00:09:34.020 clat (usec): min=3, max=3113, avg=226.87, stdev=58.17 00:09:34.020 lat (usec): min=135, max=18557, avg=245.45, stdev=248.41 00:09:34.020 clat percentiles (usec): 00:09:34.020 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 184], 00:09:34.020 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:09:34.020 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:09:34.020 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 416], 99.95th=[ 660], 00:09:34.020 | 99.99th=[ 2671] 00:09:34.020 bw ( KiB/s): min=15264, max=15418, per=28.24%, avg=15333.67, stdev=61.19, samples=6 00:09:34.020 iops : min= 3816, max= 3854, avg=3833.33, stdev=15.16, samples=6 00:09:34.020 lat (usec) : 4=0.01%, 100=0.01%, 250=71.41%, 500=28.49%, 750=0.03% 00:09:34.020 lat (msec) : 2=0.03%, 4=0.01% 00:09:34.020 cpu : usr=1.23%, sys=5.52%, ctx=14185, majf=0, minf=1 00:09:34.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 issued rwts: total=14162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.020 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66229: Tue Nov 12 10:31:22 2024 00:09:34.020 read: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(56.2MiB/3779msec) 00:09:34.020 slat (usec): min=11, max=15869, avg=20.71, stdev=249.06 00:09:34.020 clat (usec): min=118, max=11089, avg=240.13, stdev=110.11 00:09:34.020 lat (usec): min=132, max=16044, avg=260.84, stdev=271.83 00:09:34.020 clat percentiles (usec): 00:09:34.020 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 194], 00:09:34.020 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:09:34.020 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:09:34.020 | 99.00th=[ 363], 99.50th=[ 429], 99.90th=[ 898], 99.95th=[ 1369], 00:09:34.020 | 99.99th=[ 2040] 00:09:34.020 bw ( KiB/s): min=14095, max=17377, per=27.32%, avg=14834.29, stdev=1143.45, samples=7 00:09:34.020 iops : min= 3523, max= 4344, avg=3708.43, stdev=285.85, samples=7 00:09:34.020 lat (usec) : 250=50.70%, 500=49.01%, 750=0.17%, 1000=0.03% 00:09:34.020 lat (msec) : 2=0.07%, 4=0.01%, 20=0.01% 00:09:34.020 cpu : usr=1.14%, sys=5.21%, ctx=14408, majf=0, minf=2 00:09:34.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 issued rwts: total=14394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.020 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66230: Tue Nov 12 10:31:22 2024 00:09:34.020 read: IOPS=3582, BW=14.0MiB/s (14.7MB/s)(45.2MiB/3232msec) 00:09:34.020 slat (usec): min=12, max=10634, avg=17.26, stdev=121.25 00:09:34.020 clat (usec): min=137, max=2119, avg=260.20, stdev=53.98 00:09:34.020 lat (usec): min=150, max=11033, avg=277.46, stdev=134.20 00:09:34.020 clat percentiles (usec): 00:09:34.020 | 1.00th=[ 188], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 239], 00:09:34.020 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 
00:09:34.020 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:09:34.020 | 99.00th=[ 338], 99.50th=[ 474], 99.90th=[ 1106], 99.95th=[ 1516], 00:09:34.020 | 99.99th=[ 2073] 00:09:34.020 bw ( KiB/s): min=13904, max=14792, per=26.49%, avg=14381.33, stdev=296.57, samples=6 00:09:34.020 iops : min= 3476, max= 3698, avg=3595.33, stdev=74.14, samples=6 00:09:34.020 lat (usec) : 250=39.46%, 500=60.10%, 750=0.24%, 1000=0.06% 00:09:34.020 lat (msec) : 2=0.10%, 4=0.02% 00:09:34.020 cpu : usr=1.11%, sys=4.83%, ctx=11583, majf=0, minf=2 00:09:34.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.020 issued rwts: total=11580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.020 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66231: Tue Nov 12 10:31:22 2024 00:09:34.020 read: IOPS=3825, BW=14.9MiB/s (15.7MB/s)(43.6MiB/2919msec) 00:09:34.020 slat (nsec): min=8542, max=94162, avg=11708.71, stdev=3938.35 00:09:34.020 clat (usec): min=197, max=1704, avg=248.45, stdev=29.61 00:09:34.020 lat (usec): min=210, max=1714, avg=260.16, stdev=29.79 00:09:34.020 clat percentiles (usec): 00:09:34.020 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:09:34.020 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:09:34.020 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:09:34.020 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 424], 00:09:34.021 | 99.99th=[ 1582] 00:09:34.021 bw ( KiB/s): min=15256, max=15376, per=28.21%, avg=15316.80, stdev=54.73, samples=5 00:09:34.021 iops : min= 3814, max= 3844, avg=3829.20, stdev=13.68, samples=5 00:09:34.021 lat (usec) : 250=57.97%, 500=41.98%, 750=0.01% 00:09:34.021 lat (msec) : 2=0.03% 00:09:34.021 cpu : usr=1.30%, sys=3.84%, ctx=11169, majf=0, minf=2 00:09:34.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.021 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.021 issued rwts: total=11166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.021 00:09:34.021 Run status group 0 (all jobs): 00:09:34.021 READ: bw=53.0MiB/s (55.6MB/s), 14.0MiB/s-15.8MiB/s (14.7MB/s-16.6MB/s), io=200MiB (210MB), run=2919-3779msec 00:09:34.021 00:09:34.021 Disk stats (read/write): 00:09:34.021 nvme0n1: ios=13386/0, merge=0/0, ticks=3069/0, in_queue=3069, util=94.76% 00:09:34.021 nvme0n2: ios=13465/0, merge=0/0, ticks=3371/0, in_queue=3371, util=95.13% 00:09:34.021 nvme0n3: ios=11170/0, merge=0/0, ticks=2965/0, in_queue=2965, util=96.36% 00:09:34.021 nvme0n4: ios=10972/0, merge=0/0, ticks=2538/0, in_queue=2538, util=96.76% 00:09:34.279 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.279 10:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:34.535 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:34.536 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:34.793 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.793 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:35.051 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.051 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66184 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.310 nvmf hotplug test: fio failed as expected 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:35.310 10:31:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.568 rmmod nvme_tcp 00:09:35.568 rmmod nvme_fabrics 00:09:35.568 rmmod nvme_keyring 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65797 ']' 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65797 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 65797 ']' 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 65797 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.568 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65797 00:09:35.827 killing process with pid 65797 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65797' 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 65797 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 65797 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:35.827 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:35.827 10:31:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:35.828 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:36.087 ************************************ 00:09:36.087 END TEST nvmf_fio_target 00:09:36.087 ************************************ 00:09:36.087 00:09:36.087 real 0m20.192s 00:09:36.087 user 1m16.263s 00:09:36.087 sys 0m10.189s 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.087 ************************************ 00:09:36.087 START TEST nvmf_bdevio 00:09:36.087 ************************************ 00:09:36.087 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:36.087 * Looking for test storage... 
00:09:36.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.348 --rc genhtml_branch_coverage=1 00:09:36.348 --rc genhtml_function_coverage=1 00:09:36.348 --rc genhtml_legend=1 00:09:36.348 --rc geninfo_all_blocks=1 00:09:36.348 --rc geninfo_unexecuted_blocks=1 00:09:36.348 00:09:36.348 ' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.348 --rc genhtml_branch_coverage=1 00:09:36.348 --rc genhtml_function_coverage=1 00:09:36.348 --rc genhtml_legend=1 00:09:36.348 --rc geninfo_all_blocks=1 00:09:36.348 --rc geninfo_unexecuted_blocks=1 00:09:36.348 00:09:36.348 ' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.348 --rc genhtml_branch_coverage=1 00:09:36.348 --rc genhtml_function_coverage=1 00:09:36.348 --rc genhtml_legend=1 00:09:36.348 --rc geninfo_all_blocks=1 00:09:36.348 --rc geninfo_unexecuted_blocks=1 00:09:36.348 00:09:36.348 ' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.348 --rc genhtml_branch_coverage=1 00:09:36.348 --rc genhtml_function_coverage=1 00:09:36.348 --rc genhtml_legend=1 00:09:36.348 --rc geninfo_all_blocks=1 00:09:36.348 --rc geninfo_unexecuted_blocks=1 00:09:36.348 00:09:36.348 ' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.348 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
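At this point target/bdevio.sh has set its malloc bdev geometry and is about to call nvmftestinit. For orientation, this is the overall shape of the test, paraphrased from the trace entries that follow (the real script uses variables such as $NVMF_FIRST_TARGET_IP and $MALLOC_BDEV_SIZE rather than the literal values shown here):

  MALLOC_BDEV_SIZE=64; MALLOC_BLOCK_SIZE=512
  nvmftestinit                    # build the veth/bridge test network and the nvmf_tgt_ns_spdk namespace
  nvmfappstart -m 0x78            # start build/bin/nvmf_tgt inside that namespace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # run the bdevio suite against Nvme1n1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  nvmftestfini                    # tear the target and the test network back down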
00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:36.349 Cannot find device "nvmf_init_br" 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:36.349 Cannot find device "nvmf_init_br2" 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:36.349 10:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:36.349 Cannot find device "nvmf_tgt_br" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.349 Cannot find device "nvmf_tgt_br2" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:36.349 Cannot find device "nvmf_init_br" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:36.349 Cannot find device "nvmf_init_br2" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:36.349 Cannot find device "nvmf_tgt_br" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:36.349 Cannot find device "nvmf_tgt_br2" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:36.349 Cannot find device "nvmf_br" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:36.349 Cannot find device "nvmf_init_if" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:36.349 Cannot find device "nvmf_init_if2" 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:36.349 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.608 
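The "Cannot find device" and "Cannot open network namespace" messages above come from the teardown pass that nvmf_veth_init runs first; on a host with no leftover interfaces from a previous run they are expected and harmless. The topology it then builds, condensed from the trace entries that follow (interface names and 10.0.0.x/24 addresses as used in this log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host side, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host side, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # moved into the namespace, 10.0.0.3/24
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # moved into the namespace, 10.0.0.4/24
  ip link add nvmf_br type bridge                                # the four *_br peers are enslaved to nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # repeated for nvmf_init_if2, plus a FORWARD rule on nvmf_br

Once everything is up, the script pings 10.0.0.3 and 10.0.0.4 from the host and 10.0.0.1 and 10.0.0.2 from inside the namespace to confirm the fabric before any NVMe/TCP traffic is attempted.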
10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:36.608 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:36.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:09:36.609 00:09:36.609 --- 10.0.0.3 ping statistics --- 00:09:36.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.609 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:36.609 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:36.609 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:09:36.609 00:09:36.609 --- 10.0.0.4 ping statistics --- 00:09:36.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.609 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:36.609 00:09:36.609 --- 10.0.0.1 ping statistics --- 00:09:36.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.609 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:36.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:36.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:09:36.609 00:09:36.609 --- 10.0.0.2 ping statistics --- 00:09:36.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.609 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:36.609 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66556 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66556 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66556 ']' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.868 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:36.868 [2024-11-12 10:31:25.454790] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:09:36.868 [2024-11-12 10:31:25.454888] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.868 [2024-11-12 10:31:25.603910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.127 [2024-11-12 10:31:25.639669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.127 [2024-11-12 10:31:25.639743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.127 [2024-11-12 10:31:25.639771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.127 [2024-11-12 10:31:25.639780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.127 [2024-11-12 10:31:25.639787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.127 [2024-11-12 10:31:25.640658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:37.127 [2024-11-12 10:31:25.640740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:37.127 [2024-11-12 10:31:25.640863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:37.127 [2024-11-12 10:31:25.640867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.127 [2024-11-12 10:31:25.672104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 [2024-11-12 10:31:25.808604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 Malloc0 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.127 [2024-11-12 10:31:25.867840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.127 { 00:09:37.127 "params": { 00:09:37.127 "name": "Nvme$subsystem", 00:09:37.127 "trtype": "$TEST_TRANSPORT", 00:09:37.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.127 "adrfam": "ipv4", 00:09:37.127 "trsvcid": "$NVMF_PORT", 00:09:37.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.127 "hdgst": ${hdgst:-false}, 00:09:37.127 "ddgst": ${ddgst:-false} 00:09:37.127 }, 00:09:37.127 "method": "bdev_nvme_attach_controller" 00:09:37.127 } 00:09:37.127 EOF 00:09:37.127 )") 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:37.127 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
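The listener is now up on 10.0.0.3:4420, and the JSON printed just below (produced by gen_nvmf_target_json) tells the bdevio binary to attach that subsystem through the SPDK host-side bdev_nvme driver as controller "Nvme1", which is why the I/O target later appears as Nvme1n1. For comparison only, the NVME_CONNECT/NVME_HOSTNQN/NVME_HOSTID values exported earlier would correspond to roughly this kernel-initiator connect; it is illustrative and not something this trace actually runs, since bdevio drives the SPDK initiator instead:

  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
      --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096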
00:09:37.386 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:37.386 10:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.386 "params": { 00:09:37.386 "name": "Nvme1", 00:09:37.386 "trtype": "tcp", 00:09:37.386 "traddr": "10.0.0.3", 00:09:37.386 "adrfam": "ipv4", 00:09:37.386 "trsvcid": "4420", 00:09:37.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.386 "hdgst": false, 00:09:37.386 "ddgst": false 00:09:37.386 }, 00:09:37.386 "method": "bdev_nvme_attach_controller" 00:09:37.386 }' 00:09:37.386 [2024-11-12 10:31:25.930080] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:09:37.386 [2024-11-12 10:31:25.930219] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66585 ] 00:09:37.386 [2024-11-12 10:31:26.079015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.386 [2024-11-12 10:31:26.111838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.386 [2024-11-12 10:31:26.111895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.386 [2024-11-12 10:31:26.111898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.645 [2024-11-12 10:31:26.151761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.645 I/O targets: 00:09:37.645 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:37.645 00:09:37.645 00:09:37.645 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.645 http://cunit.sourceforge.net/ 00:09:37.645 00:09:37.645 00:09:37.645 Suite: bdevio tests on: Nvme1n1 00:09:37.645 Test: blockdev write read block ...passed 00:09:37.645 Test: blockdev write zeroes read block ...passed 00:09:37.645 Test: blockdev write zeroes read no split ...passed 00:09:37.645 Test: blockdev write zeroes read split ...passed 00:09:37.645 Test: blockdev write zeroes read split partial ...passed 00:09:37.645 Test: blockdev reset ...[2024-11-12 10:31:26.287375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:37.645 [2024-11-12 10:31:26.287473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139c180 (9): Bad file descriptor 00:09:37.645 passed 00:09:37.645 Test: blockdev write read 8 blocks ...[2024-11-12 10:31:26.304111] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:37.645 passed 00:09:37.645 Test: blockdev write read size > 128k ...passed 00:09:37.645 Test: blockdev write read invalid size ...passed 00:09:37.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.645 Test: blockdev write read max offset ...passed 00:09:37.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.645 Test: blockdev writev readv 8 blocks ...passed 00:09:37.645 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.645 Test: blockdev writev readv block ...passed 00:09:37.645 Test: blockdev writev readv size > 128k ...passed 00:09:37.645 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.645 Test: blockdev comparev and writev ...[2024-11-12 10:31:26.315303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.315351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.315375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.315389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:37.645 passed 00:09:37.645 Test: blockdev nvme passthru rw ...[2024-11-12 10:31:26.315791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.315821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.315843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.315855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.316196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.316220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.316241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.316253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.316548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.316568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.316588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:37.645 [2024-11-12 10:31:26.316600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:37.645 passed 00:09:37.645 Test: blockdev nvme passthru vendor specific ...[2024-11-12 10:31:26.317820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:37.645 [2024-11-12 10:31:26.317959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.318454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:37.645 [2024-11-12 10:31:26.318488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:37.645 [2024-11-12 10:31:26.318635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:37.645 [2024-11-12 10:31:26.318660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:37.645 passed 00:09:37.645 Test: blockdev nvme admin passthru ...[2024-11-12 10:31:26.318785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:37.645 [2024-11-12 10:31:26.318809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:37.645 passed 00:09:37.645 Test: blockdev copy ...passed 00:09:37.645 00:09:37.645 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.645 suites 1 1 n/a 0 0 00:09:37.645 tests 23 23 23 0 0 00:09:37.645 asserts 152 152 152 0 n/a 00:09:37.645 00:09:37.645 Elapsed time = 0.157 seconds 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.904 rmmod nvme_tcp 00:09:37.904 rmmod nvme_fabrics 00:09:37.904 rmmod nvme_keyring 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66556 ']' 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66556 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66556 ']' 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66556 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66556 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:37.904 killing process with pid 66556 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66556' 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66556 00:09:37.904 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66556 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:38.174 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:38.174 10:31:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.469 10:31:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:38.469 00:09:38.469 real 0m2.259s 00:09:38.469 user 0m5.706s 00:09:38.469 sys 0m0.735s 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.469 ************************************ 00:09:38.469 END TEST nvmf_bdevio 00:09:38.469 ************************************ 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:38.469 ************************************ 00:09:38.469 END TEST nvmf_target_core 00:09:38.469 ************************************ 00:09:38.469 00:09:38.469 real 2m31.391s 00:09:38.469 user 6m38.979s 00:09:38.469 sys 0m53.327s 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.469 10:31:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:38.469 10:31:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:38.469 10:31:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.469 10:31:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.469 ************************************ 00:09:38.469 START TEST nvmf_target_extra 00:09:38.469 ************************************ 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:38.469 * Looking for test storage... 
00:09:38.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:38.469 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.737 --rc genhtml_branch_coverage=1 00:09:38.737 --rc genhtml_function_coverage=1 00:09:38.737 --rc genhtml_legend=1 00:09:38.737 --rc geninfo_all_blocks=1 00:09:38.737 --rc geninfo_unexecuted_blocks=1 00:09:38.737 00:09:38.737 ' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.737 --rc genhtml_branch_coverage=1 00:09:38.737 --rc genhtml_function_coverage=1 00:09:38.737 --rc genhtml_legend=1 00:09:38.737 --rc geninfo_all_blocks=1 00:09:38.737 --rc geninfo_unexecuted_blocks=1 00:09:38.737 00:09:38.737 ' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.737 --rc genhtml_branch_coverage=1 00:09:38.737 --rc genhtml_function_coverage=1 00:09:38.737 --rc genhtml_legend=1 00:09:38.737 --rc geninfo_all_blocks=1 00:09:38.737 --rc geninfo_unexecuted_blocks=1 00:09:38.737 00:09:38.737 ' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.737 --rc genhtml_branch_coverage=1 00:09:38.737 --rc genhtml_function_coverage=1 00:09:38.737 --rc genhtml_legend=1 00:09:38.737 --rc geninfo_all_blocks=1 00:09:38.737 --rc geninfo_unexecuted_blocks=1 00:09:38.737 00:09:38.737 ' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.737 10:31:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.737 10:31:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.738 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:38.738 ************************************ 00:09:38.738 START TEST nvmf_auth_target 00:09:38.738 ************************************ 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:38.738 * Looking for test storage... 
00:09:38.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:38.738 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:38.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.999 --rc genhtml_branch_coverage=1 00:09:38.999 --rc genhtml_function_coverage=1 00:09:38.999 --rc genhtml_legend=1 00:09:38.999 --rc geninfo_all_blocks=1 00:09:38.999 --rc geninfo_unexecuted_blocks=1 00:09:38.999 00:09:38.999 ' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:38.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.999 --rc genhtml_branch_coverage=1 00:09:38.999 --rc genhtml_function_coverage=1 00:09:38.999 --rc genhtml_legend=1 00:09:38.999 --rc geninfo_all_blocks=1 00:09:38.999 --rc geninfo_unexecuted_blocks=1 00:09:38.999 00:09:38.999 ' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:38.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.999 --rc genhtml_branch_coverage=1 00:09:38.999 --rc genhtml_function_coverage=1 00:09:38.999 --rc genhtml_legend=1 00:09:38.999 --rc geninfo_all_blocks=1 00:09:38.999 --rc geninfo_unexecuted_blocks=1 00:09:38.999 00:09:38.999 ' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:38.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.999 --rc genhtml_branch_coverage=1 00:09:38.999 --rc genhtml_function_coverage=1 00:09:38.999 --rc genhtml_legend=1 00:09:38.999 --rc geninfo_all_blocks=1 00:09:38.999 --rc geninfo_unexecuted_blocks=1 00:09:38.999 00:09:38.999 ' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:38.999 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.000 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.000 
10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:39.000 Cannot find device "nvmf_init_br" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:39.000 Cannot find device "nvmf_init_br2" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:39.000 Cannot find device "nvmf_tgt_br" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.000 Cannot find device "nvmf_tgt_br2" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:39.000 Cannot find device "nvmf_init_br" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:39.000 Cannot find device "nvmf_init_br2" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:39.000 Cannot find device "nvmf_tgt_br" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:39.000 Cannot find device "nvmf_tgt_br2" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:39.000 Cannot find device "nvmf_br" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:39.000 Cannot find device "nvmf_init_if" 00:09:39.000 10:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:39.000 Cannot find device "nvmf_init_if2" 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.000 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:39.260 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.261 10:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:39.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:39.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:39.261 00:09:39.261 --- 10.0.0.3 ping statistics --- 00:09:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.261 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:39.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:39.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:39.261 00:09:39.261 --- 10.0.0.4 ping statistics --- 00:09:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.261 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:39.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:39.261 00:09:39.261 --- 10.0.0.1 ping statistics --- 00:09:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.261 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:39.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:39.261 00:09:39.261 --- 10.0.0.2 ping statistics --- 00:09:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.261 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.261 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66868 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66868 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66868 ']' 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
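Note: the trace above builds the virtual test network before launching the target: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and for the target side (moved into the nvmf_tgt_ns_spdk namespace), all joined by the nvmf_br bridge, with iptables rules admitting TCP port 4420, and verified by the four pings before nvmf_tgt is started inside the namespace. A minimal stand-alone shell sketch of the same topology, reduced to one veth pair per side; interface names, addresses and the port are taken from the trace, error handling and the second pair per side are omitted:

    # sketch of the nvmf_veth_init topology seen in the trace (requires root, iproute2, iptables)
    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator side, one for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing as in the trace: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side ends so initiator and target namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP listener port towards the initiator interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # same connectivity check the trace performs before starting nvmf_tgt
    ping -c 1 10.0.0.3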
00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:39.261 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66892 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc610ffef99c8828b3b56380634b241f02e44c4ed57b26b3 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Wr2 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc610ffef99c8828b3b56380634b241f02e44c4ed57b26b3 0 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc610ffef99c8828b3b56380634b241f02e44c4ed57b26b3 0 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc610ffef99c8828b3b56380634b241f02e44c4ed57b26b3 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:39.829 10:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Wr2 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Wr2 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Wr2 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77fc9c389d9c96447d70f15868181e709baf816046c7c6b07f37818fd623e806 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CCi 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 77fc9c389d9c96447d70f15868181e709baf816046c7c6b07f37818fd623e806 3 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77fc9c389d9c96447d70f15868181e709baf816046c7c6b07f37818fd623e806 3 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=77fc9c389d9c96447d70f15868181e709baf816046c7c6b07f37818fd623e806 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CCi 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CCi 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CCi 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:39.829 10:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=652cf351bf6daf063b5505c6a945192b 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OYB 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 652cf351bf6daf063b5505c6a945192b 1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 652cf351bf6daf063b5505c6a945192b 1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=652cf351bf6daf063b5505c6a945192b 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:39.829 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OYB 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OYB 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OYB 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bf98f1debd567080a1d8887843c264e86da240b11331a9e9 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5Ch 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bf98f1debd567080a1d8887843c264e86da240b11331a9e9 2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bf98f1debd567080a1d8887843c264e86da240b11331a9e9 2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bf98f1debd567080a1d8887843c264e86da240b11331a9e9 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5Ch 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5Ch 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.5Ch 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e138d880f0a65dfd3fc18de5ce7161a588958cb13b286442 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OSG 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e138d880f0a65dfd3fc18de5ce7161a588958cb13b286442 2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e138d880f0a65dfd3fc18de5ce7161a588958cb13b286442 2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e138d880f0a65dfd3fc18de5ce7161a588958cb13b286442 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OSG 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OSG 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.OSG 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:40.089 10:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:40.089 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eaea98953bbb8a03e728f5d6936af81a 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cxg 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eaea98953bbb8a03e728f5d6936af81a 1 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eaea98953bbb8a03e728f5d6936af81a 1 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eaea98953bbb8a03e728f5d6936af81a 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cxg 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cxg 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.cxg 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d2494cf5899c9f1981f6e39cfee844d08dfae62730789af7e937ad362dbd0016 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NPr 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
d2494cf5899c9f1981f6e39cfee844d08dfae62730789af7e937ad362dbd0016 3 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d2494cf5899c9f1981f6e39cfee844d08dfae62730789af7e937ad362dbd0016 3 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d2494cf5899c9f1981f6e39cfee844d08dfae62730789af7e937ad362dbd0016 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:40.090 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NPr 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NPr 00:09:40.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.NPr 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66868 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66868 ']' 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.349 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66892 /var/tmp/host.sock 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66892 ']' 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
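Note: the key material generated above comes from gen_dhchap_key in nvmf/common.sh: random hex read from /dev/urandom with xxd is wrapped into a DHHC-1:<digest-id>:<base64 payload>: secret and written to a 0600 temp file (digest ids in the trace: null=0, sha256=1, sha384=2, sha512=3; requested lengths 48/32/64 hex characters). The sketch below is only an approximation under those assumptions: gen_key_sketch is a hypothetical helper, and the inline "python -" encoding step is not visible in the trace, so the payload here is plain base64 of the hex string, whereas the real secret (see the DHHC-1:00:... value used later in the connect) appears to append a few extra bytes, likely a checksum, before the trailing colon.

    # rough sketch of the traced gen_dhchap_key steps (digest ids: null=0 sha256=1 sha384=2 sha512=3)
    gen_key_sketch() {
        local digest_id=$1 len=$2        # len = hex characters requested, e.g. "gen_dhchap_key sha512 64"
        local hexkey file

        hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same command as the trace
        file=$(mktemp -t spdk.key-sketch.XXX)

        # approximate DHHC-1 wrapping; the real helper's "python -" step also adds a checksum
        printf 'DHHC-1:%02x:%s:\n' "$digest_id" "$(printf '%s' "$hexkey" | base64 -w0)" > "$file"

        chmod 0600 "$file"
        echo "$file"
    }

    key0=$(gen_key_sketch 0 48)   # e.g. the 48-character null-digest key stored as keys[0] above

The generated files are then registered on both target and host in the steps that follow (keyring_file_add_key key0..key3 / ckey0..ckey2), before bdev_nvme_set_options selects the digest/dhgroup combination under test.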
00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.607 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wr2 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Wr2 00:09:40.867 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Wr2 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.CCi ]] 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CCi 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CCi 00:09:41.125 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CCi 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OYB 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OYB 00:09:41.384 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OYB 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.5Ch ]] 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5Ch 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5Ch 00:09:41.642 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5Ch 00:09:41.901 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:41.901 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OSG 00:09:41.902 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.902 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.902 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.902 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OSG 00:09:41.902 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OSG 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.cxg ]] 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cxg 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cxg 00:09:42.160 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cxg 00:09:42.418 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:42.418 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.NPr 00:09:42.419 10:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.419 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.419 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.419 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.NPr 00:09:42.419 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.NPr 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:42.677 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.936 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:43.195 00:09:43.195 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:43.195 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.195 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:43.453 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:43.454 { 00:09:43.454 "cntlid": 1, 00:09:43.454 "qid": 0, 00:09:43.454 "state": "enabled", 00:09:43.454 "thread": "nvmf_tgt_poll_group_000", 00:09:43.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:43.454 "listen_address": { 00:09:43.454 "trtype": "TCP", 00:09:43.454 "adrfam": "IPv4", 00:09:43.454 "traddr": "10.0.0.3", 00:09:43.454 "trsvcid": "4420" 00:09:43.454 }, 00:09:43.454 "peer_address": { 00:09:43.454 "trtype": "TCP", 00:09:43.454 "adrfam": "IPv4", 00:09:43.454 "traddr": "10.0.0.1", 00:09:43.454 "trsvcid": "59008" 00:09:43.454 }, 00:09:43.454 "auth": { 00:09:43.454 "state": "completed", 00:09:43.454 "digest": "sha256", 00:09:43.454 "dhgroup": "null" 00:09:43.454 } 00:09:43.454 } 00:09:43.454 ]' 00:09:43.454 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.712 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.971 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:09:43.971 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:09:48.157 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:48.416 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:48.674 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:48.674 10:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:48.933 00:09:48.933 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.933 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.933 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.192 { 00:09:49.192 "cntlid": 3, 00:09:49.192 "qid": 0, 00:09:49.192 "state": "enabled", 00:09:49.192 "thread": "nvmf_tgt_poll_group_000", 00:09:49.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:49.192 "listen_address": { 00:09:49.192 "trtype": "TCP", 00:09:49.192 "adrfam": "IPv4", 00:09:49.192 "traddr": "10.0.0.3", 00:09:49.192 "trsvcid": "4420" 00:09:49.192 }, 00:09:49.192 "peer_address": { 00:09:49.192 "trtype": "TCP", 00:09:49.192 "adrfam": "IPv4", 00:09:49.192 "traddr": "10.0.0.1", 00:09:49.192 "trsvcid": "59030" 00:09:49.192 }, 00:09:49.192 "auth": { 00:09:49.192 "state": "completed", 00:09:49.192 "digest": "sha256", 00:09:49.192 "dhgroup": "null" 00:09:49.192 } 00:09:49.192 } 00:09:49.192 ]' 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.192 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.450 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:49.450 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.450 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.450 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.450 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.709 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret 
DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:09:49.709 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.645 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.904 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.904 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.904 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.904 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:51.162 00:09:51.162 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:51.162 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:51.162 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:51.421 { 00:09:51.421 "cntlid": 5, 00:09:51.421 "qid": 0, 00:09:51.421 "state": "enabled", 00:09:51.421 "thread": "nvmf_tgt_poll_group_000", 00:09:51.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:51.421 "listen_address": { 00:09:51.421 "trtype": "TCP", 00:09:51.421 "adrfam": "IPv4", 00:09:51.421 "traddr": "10.0.0.3", 00:09:51.421 "trsvcid": "4420" 00:09:51.421 }, 00:09:51.421 "peer_address": { 00:09:51.421 "trtype": "TCP", 00:09:51.421 "adrfam": "IPv4", 00:09:51.421 "traddr": "10.0.0.1", 00:09:51.421 "trsvcid": "59052" 00:09:51.421 }, 00:09:51.421 "auth": { 00:09:51.421 "state": "completed", 00:09:51.421 "digest": "sha256", 00:09:51.421 "dhgroup": "null" 00:09:51.421 } 00:09:51.421 } 00:09:51.421 ]' 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:51.421 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:51.680 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.680 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.680 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.939 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:09:51.939 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:09:52.505 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.505 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:52.505 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.506 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.506 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.506 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:52.506 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:52.506 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:52.764 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:53.330 00:09:53.330 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:53.330 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.330 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.589 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:53.589 { 00:09:53.589 "cntlid": 7, 00:09:53.589 "qid": 0, 00:09:53.589 "state": "enabled", 00:09:53.589 "thread": "nvmf_tgt_poll_group_000", 00:09:53.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:53.589 "listen_address": { 00:09:53.589 "trtype": "TCP", 00:09:53.589 "adrfam": "IPv4", 00:09:53.590 "traddr": "10.0.0.3", 00:09:53.590 "trsvcid": "4420" 00:09:53.590 }, 00:09:53.590 "peer_address": { 00:09:53.590 "trtype": "TCP", 00:09:53.590 "adrfam": "IPv4", 00:09:53.590 "traddr": "10.0.0.1", 00:09:53.590 "trsvcid": "59072" 00:09:53.590 }, 00:09:53.590 "auth": { 00:09:53.590 "state": "completed", 00:09:53.590 "digest": "sha256", 00:09:53.590 "dhgroup": "null" 00:09:53.590 } 00:09:53.590 } 00:09:53.590 ]' 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:53.590 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.848 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:09:53.848 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:09:54.414 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:54.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:54.671 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:54.930 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.189 00:09:55.189 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.189 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.189 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.448 { 00:09:55.448 "cntlid": 9, 00:09:55.448 "qid": 0, 00:09:55.448 "state": "enabled", 00:09:55.448 "thread": "nvmf_tgt_poll_group_000", 00:09:55.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:55.448 "listen_address": { 00:09:55.448 "trtype": "TCP", 00:09:55.448 "adrfam": "IPv4", 00:09:55.448 "traddr": "10.0.0.3", 00:09:55.448 "trsvcid": "4420" 00:09:55.448 }, 00:09:55.448 "peer_address": { 00:09:55.448 "trtype": "TCP", 00:09:55.448 "adrfam": "IPv4", 00:09:55.448 "traddr": "10.0.0.1", 00:09:55.448 "trsvcid": "39290" 00:09:55.448 }, 00:09:55.448 "auth": { 00:09:55.448 "state": "completed", 00:09:55.448 "digest": "sha256", 00:09:55.448 "dhgroup": "ffdhe2048" 00:09:55.448 } 00:09:55.448 } 00:09:55.448 ]' 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.448 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.707 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:55.707 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.707 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.707 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.707 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:55.966 
10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:09:55.966 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:56.558 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.817 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.077 00:09:57.077 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.077 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.077 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.336 { 00:09:57.336 "cntlid": 11, 00:09:57.336 "qid": 0, 00:09:57.336 "state": "enabled", 00:09:57.336 "thread": "nvmf_tgt_poll_group_000", 00:09:57.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:57.336 "listen_address": { 00:09:57.336 "trtype": "TCP", 00:09:57.336 "adrfam": "IPv4", 00:09:57.336 "traddr": "10.0.0.3", 00:09:57.336 "trsvcid": "4420" 00:09:57.336 }, 00:09:57.336 "peer_address": { 00:09:57.336 "trtype": "TCP", 00:09:57.336 "adrfam": "IPv4", 00:09:57.336 "traddr": "10.0.0.1", 00:09:57.336 "trsvcid": "39318" 00:09:57.336 }, 00:09:57.336 "auth": { 00:09:57.336 "state": "completed", 00:09:57.336 "digest": "sha256", 00:09:57.336 "dhgroup": "ffdhe2048" 00:09:57.336 } 00:09:57.336 } 00:09:57.336 ]' 00:09:57.336 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.595 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.595 
10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.853 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:09:57.853 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:58.421 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.679 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.680 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.247 00:09:59.247 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.247 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.247 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.505 { 00:09:59.505 "cntlid": 13, 00:09:59.505 "qid": 0, 00:09:59.505 "state": "enabled", 00:09:59.505 "thread": "nvmf_tgt_poll_group_000", 00:09:59.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:09:59.505 "listen_address": { 00:09:59.505 "trtype": "TCP", 00:09:59.505 "adrfam": "IPv4", 00:09:59.505 "traddr": "10.0.0.3", 00:09:59.505 "trsvcid": "4420" 00:09:59.505 }, 00:09:59.505 "peer_address": { 00:09:59.505 "trtype": "TCP", 00:09:59.505 "adrfam": "IPv4", 00:09:59.505 "traddr": "10.0.0.1", 00:09:59.505 "trsvcid": "39346" 00:09:59.505 }, 00:09:59.505 "auth": { 00:09:59.505 "state": "completed", 00:09:59.505 "digest": "sha256", 00:09:59.505 "dhgroup": "ffdhe2048" 00:09:59.505 } 00:09:59.505 } 00:09:59.505 ]' 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.505 10:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.505 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.764 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:09:59.764 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:00.699 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
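Each cycle in this trace exercises the same DH-HMAC-CHAP sequence: restrict the host-side driver to one digest/dhgroup pair, register the host NQN on the subsystem with a key, attach a controller (authentication runs during CONNECT), inspect the negotiated parameters, then tear everything down. The sketch below is not part of the captured output; it condenses one such cycle using the literal socket paths, addresses, NQNs and key names visible in this log, and it assumes the bare rpc_cmd calls go to the target application's default RPC socket while hostrpc calls go to /var/tmp/host.sock, as the trace suggests.

# One illustrative cycle (sha256 digest, ffdhe2048 DH group, key1/ckey1).
# "key1"/"ckey1" are key names registered with the keyring earlier in the run (not shown here).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Limit the host-side NVMe driver to the digest/dhgroup under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow the host NQN on the subsystem and bind it to the host key and controller key
#    (target-side RPC; omitting --dhchap-ctrlr-key, as the key3 cycles do, leaves the
#    controller unauthenticated toward the host, i.e. unidirectional authentication).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller through the host application; DH-HMAC-CHAP runs on this connect.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Verify the negotiated digest/dhgroup/state on the target, then tear down.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The qpair dumps in this log show what step 4 returns: auth.state reaches "completed" and auth.digest/auth.dhgroup echo the combination configured in step 1, which is exactly what the [[ ... ]] checks in the trace assert.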
00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:00.959 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:01.225 00:10:01.225 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.225 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.225 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.484 { 00:10:01.484 "cntlid": 15, 00:10:01.484 "qid": 0, 00:10:01.484 "state": "enabled", 00:10:01.484 "thread": "nvmf_tgt_poll_group_000", 00:10:01.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:01.484 "listen_address": { 00:10:01.484 "trtype": "TCP", 00:10:01.484 "adrfam": "IPv4", 00:10:01.484 "traddr": "10.0.0.3", 00:10:01.484 "trsvcid": "4420" 00:10:01.484 }, 00:10:01.484 "peer_address": { 00:10:01.484 "trtype": "TCP", 00:10:01.484 "adrfam": "IPv4", 00:10:01.484 "traddr": "10.0.0.1", 00:10:01.484 "trsvcid": "39370" 00:10:01.484 }, 00:10:01.484 "auth": { 00:10:01.484 "state": "completed", 00:10:01.484 "digest": "sha256", 00:10:01.484 "dhgroup": "ffdhe2048" 00:10:01.484 } 00:10:01.484 } 00:10:01.484 ]' 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.484 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.743 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:01.743 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.743 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.743 
10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.743 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.001 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:02.002 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.569 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:02.827 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.828 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.086 00:10:03.345 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.345 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.345 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.604 { 00:10:03.604 "cntlid": 17, 00:10:03.604 "qid": 0, 00:10:03.604 "state": "enabled", 00:10:03.604 "thread": "nvmf_tgt_poll_group_000", 00:10:03.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:03.604 "listen_address": { 00:10:03.604 "trtype": "TCP", 00:10:03.604 "adrfam": "IPv4", 00:10:03.604 "traddr": "10.0.0.3", 00:10:03.604 "trsvcid": "4420" 00:10:03.604 }, 00:10:03.604 "peer_address": { 00:10:03.604 "trtype": "TCP", 00:10:03.604 "adrfam": "IPv4", 00:10:03.604 "traddr": "10.0.0.1", 00:10:03.604 "trsvcid": "39384" 00:10:03.604 }, 00:10:03.604 "auth": { 00:10:03.604 "state": "completed", 00:10:03.604 "digest": "sha256", 00:10:03.604 "dhgroup": "ffdhe3072" 00:10:03.604 } 00:10:03.604 } 00:10:03.604 ]' 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.604 10:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.604 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.171 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:04.171 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.738 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.996 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.997 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.255 00:10:05.255 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.255 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.255 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.514 { 00:10:05.514 "cntlid": 19, 00:10:05.514 "qid": 0, 00:10:05.514 "state": "enabled", 00:10:05.514 "thread": "nvmf_tgt_poll_group_000", 00:10:05.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:05.514 "listen_address": { 00:10:05.514 "trtype": "TCP", 00:10:05.514 "adrfam": "IPv4", 00:10:05.514 "traddr": "10.0.0.3", 00:10:05.514 "trsvcid": "4420" 00:10:05.514 }, 00:10:05.514 "peer_address": { 00:10:05.514 "trtype": "TCP", 00:10:05.514 "adrfam": "IPv4", 00:10:05.514 "traddr": "10.0.0.1", 00:10:05.514 "trsvcid": "48008" 00:10:05.514 }, 00:10:05.514 "auth": { 00:10:05.514 "state": "completed", 00:10:05.514 "digest": "sha256", 00:10:05.514 "dhgroup": "ffdhe3072" 00:10:05.514 } 00:10:05.514 } 00:10:05.514 ]' 00:10:05.514 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.772 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.031 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:06.031 10:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.966 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:06.967 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.227 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.486 00:10:07.486 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.486 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.486 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.055 { 00:10:08.055 "cntlid": 21, 00:10:08.055 "qid": 0, 00:10:08.055 "state": "enabled", 00:10:08.055 "thread": "nvmf_tgt_poll_group_000", 00:10:08.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:08.055 "listen_address": { 00:10:08.055 "trtype": "TCP", 00:10:08.055 "adrfam": "IPv4", 00:10:08.055 "traddr": "10.0.0.3", 00:10:08.055 "trsvcid": "4420" 00:10:08.055 }, 00:10:08.055 "peer_address": { 00:10:08.055 "trtype": "TCP", 00:10:08.055 "adrfam": "IPv4", 00:10:08.055 "traddr": "10.0.0.1", 00:10:08.055 "trsvcid": "48040" 00:10:08.055 }, 00:10:08.055 "auth": { 00:10:08.055 "state": "completed", 00:10:08.055 "digest": "sha256", 00:10:08.055 "dhgroup": "ffdhe3072" 00:10:08.055 } 00:10:08.055 } 00:10:08.055 ]' 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.055 10:31:56 
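[annotation] The trace just above is one pass of the test's per-key sequence: restrict the host bdev layer to a single digest/DH group, register the key pair for this host NQN on the subsystem, attach a controller with that key, and read the result back. A minimal standalone sketch of that sequence, assuming the same sockets and addresses as this run and that key2/ckey2 name key objects registered earlier in the script (their setup is not shown here); the target-side RPC socket is also not visible in the trace and is assumed to be rpc.py's default:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096

    # 1. Limit the host to the digest/dhgroup under test (host RPC server on /var/tmp/host.sock).
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # 2. Allow this host on the subsystem with the key pair (target-side RPC, default socket assumed).
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller through the host bdev layer using the same key pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. Confirm the controller came up before checking the negotiated auth parameters.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'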
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.055 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.314 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:08.315 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:08.882 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.882 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:08.883 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.451 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.711 00:10:09.711 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.711 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.711 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.970 { 00:10:09.970 "cntlid": 23, 00:10:09.970 "qid": 0, 00:10:09.970 "state": "enabled", 00:10:09.970 "thread": "nvmf_tgt_poll_group_000", 00:10:09.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:09.970 "listen_address": { 00:10:09.970 "trtype": "TCP", 00:10:09.970 "adrfam": "IPv4", 00:10:09.970 "traddr": "10.0.0.3", 00:10:09.970 "trsvcid": "4420" 00:10:09.970 }, 00:10:09.970 "peer_address": { 00:10:09.970 "trtype": "TCP", 00:10:09.970 "adrfam": "IPv4", 00:10:09.970 "traddr": "10.0.0.1", 00:10:09.970 "trsvcid": "48074" 00:10:09.970 }, 00:10:09.970 "auth": { 00:10:09.970 "state": "completed", 00:10:09.970 "digest": "sha256", 00:10:09.970 "dhgroup": "ffdhe3072" 00:10:09.970 } 00:10:09.970 } 00:10:09.970 ]' 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:09.970 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.229 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.229 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.229 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.488 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:10.488 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.065 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.326 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.585 00:10:11.585 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.585 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.585 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.844 { 00:10:11.844 "cntlid": 25, 00:10:11.844 "qid": 0, 00:10:11.844 "state": "enabled", 00:10:11.844 "thread": "nvmf_tgt_poll_group_000", 00:10:11.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:11.844 "listen_address": { 00:10:11.844 "trtype": "TCP", 00:10:11.844 "adrfam": "IPv4", 00:10:11.844 "traddr": "10.0.0.3", 00:10:11.844 "trsvcid": "4420" 00:10:11.844 }, 00:10:11.844 "peer_address": { 00:10:11.844 "trtype": "TCP", 00:10:11.844 "adrfam": "IPv4", 00:10:11.844 "traddr": "10.0.0.1", 00:10:11.844 "trsvcid": "48110" 00:10:11.844 }, 00:10:11.844 "auth": { 00:10:11.844 "state": "completed", 00:10:11.844 "digest": "sha256", 00:10:11.844 "dhgroup": "ffdhe4096" 00:10:11.844 } 00:10:11.844 } 00:10:11.844 ]' 00:10:11.844 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.103 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.362 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:12.362 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:12.981 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.981 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:12.981 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.981 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.251 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.252 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:13.252 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:13.252 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.252 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.252 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.252 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.510 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.510 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.510 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.511 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.769 00:10:13.769 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.769 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.769 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.028 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.028 { 00:10:14.028 "cntlid": 27, 00:10:14.028 "qid": 0, 00:10:14.028 "state": "enabled", 00:10:14.028 "thread": "nvmf_tgt_poll_group_000", 00:10:14.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:14.028 "listen_address": { 00:10:14.028 "trtype": "TCP", 00:10:14.028 "adrfam": "IPv4", 00:10:14.028 "traddr": "10.0.0.3", 00:10:14.028 "trsvcid": "4420" 00:10:14.029 }, 00:10:14.029 "peer_address": { 00:10:14.029 "trtype": "TCP", 00:10:14.029 "adrfam": "IPv4", 00:10:14.029 "traddr": "10.0.0.1", 00:10:14.029 "trsvcid": "44140" 00:10:14.029 }, 00:10:14.029 "auth": { 00:10:14.029 "state": "completed", 
00:10:14.029 "digest": "sha256", 00:10:14.029 "dhgroup": "ffdhe4096" 00:10:14.029 } 00:10:14.029 } 00:10:14.029 ]' 00:10:14.029 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.029 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.029 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.029 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:14.288 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.288 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.288 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.288 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.546 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:14.546 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.114 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:15.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.373 10:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:15.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:15.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.374 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.941 00:10:15.941 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.941 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.941 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.200 { 00:10:16.200 "cntlid": 29, 00:10:16.200 "qid": 0, 00:10:16.200 "state": "enabled", 00:10:16.200 "thread": "nvmf_tgt_poll_group_000", 00:10:16.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:16.200 "listen_address": { 00:10:16.200 "trtype": "TCP", 00:10:16.200 "adrfam": "IPv4", 00:10:16.200 "traddr": "10.0.0.3", 00:10:16.200 "trsvcid": "4420" 00:10:16.200 }, 00:10:16.200 "peer_address": { 00:10:16.200 "trtype": "TCP", 00:10:16.200 "adrfam": 
"IPv4", 00:10:16.200 "traddr": "10.0.0.1", 00:10:16.200 "trsvcid": "44158" 00:10:16.200 }, 00:10:16.200 "auth": { 00:10:16.200 "state": "completed", 00:10:16.200 "digest": "sha256", 00:10:16.200 "dhgroup": "ffdhe4096" 00:10:16.200 } 00:10:16.200 } 00:10:16.200 ]' 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.200 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.459 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:16.459 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.395 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:17.653 10:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.653 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.654 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:17.912 00:10:17.912 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.912 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.912 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.170 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.170 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.170 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.170 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.428 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.428 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.428 { 00:10:18.428 "cntlid": 31, 00:10:18.428 "qid": 0, 00:10:18.428 "state": "enabled", 00:10:18.428 "thread": "nvmf_tgt_poll_group_000", 00:10:18.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:18.428 "listen_address": { 00:10:18.428 "trtype": "TCP", 00:10:18.428 "adrfam": "IPv4", 00:10:18.428 "traddr": "10.0.0.3", 00:10:18.428 "trsvcid": "4420" 00:10:18.428 }, 00:10:18.428 "peer_address": { 00:10:18.428 "trtype": "TCP", 
00:10:18.428 "adrfam": "IPv4", 00:10:18.428 "traddr": "10.0.0.1", 00:10:18.428 "trsvcid": "44194" 00:10:18.428 }, 00:10:18.428 "auth": { 00:10:18.428 "state": "completed", 00:10:18.428 "digest": "sha256", 00:10:18.428 "dhgroup": "ffdhe4096" 00:10:18.428 } 00:10:18.428 } 00:10:18.428 ]' 00:10:18.428 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.428 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.428 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.428 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:18.428 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.428 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.428 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.428 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.686 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:18.687 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.621 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:19.879 
10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.879 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.445 00:10:20.445 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.445 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.445 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.704 { 00:10:20.704 "cntlid": 33, 00:10:20.704 "qid": 0, 00:10:20.704 "state": "enabled", 00:10:20.704 "thread": "nvmf_tgt_poll_group_000", 00:10:20.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:20.704 "listen_address": { 00:10:20.704 "trtype": "TCP", 00:10:20.704 "adrfam": "IPv4", 00:10:20.704 "traddr": 
"10.0.0.3", 00:10:20.704 "trsvcid": "4420" 00:10:20.704 }, 00:10:20.704 "peer_address": { 00:10:20.704 "trtype": "TCP", 00:10:20.704 "adrfam": "IPv4", 00:10:20.704 "traddr": "10.0.0.1", 00:10:20.704 "trsvcid": "44214" 00:10:20.704 }, 00:10:20.704 "auth": { 00:10:20.704 "state": "completed", 00:10:20.704 "digest": "sha256", 00:10:20.704 "dhgroup": "ffdhe6144" 00:10:20.704 } 00:10:20.704 } 00:10:20.704 ]' 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.704 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.269 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:21.269 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.835 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.093 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.658 00:10:22.658 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.658 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.658 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.917 { 00:10:22.917 "cntlid": 35, 00:10:22.917 "qid": 0, 00:10:22.917 "state": "enabled", 00:10:22.917 "thread": "nvmf_tgt_poll_group_000", 
00:10:22.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:22.917 "listen_address": { 00:10:22.917 "trtype": "TCP", 00:10:22.917 "adrfam": "IPv4", 00:10:22.917 "traddr": "10.0.0.3", 00:10:22.917 "trsvcid": "4420" 00:10:22.917 }, 00:10:22.917 "peer_address": { 00:10:22.917 "trtype": "TCP", 00:10:22.917 "adrfam": "IPv4", 00:10:22.917 "traddr": "10.0.0.1", 00:10:22.917 "trsvcid": "44242" 00:10:22.917 }, 00:10:22.917 "auth": { 00:10:22.917 "state": "completed", 00:10:22.917 "digest": "sha256", 00:10:22.917 "dhgroup": "ffdhe6144" 00:10:22.917 } 00:10:22.917 } 00:10:22.917 ]' 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.917 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.175 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:23.175 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.175 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.175 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.176 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.433 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:23.433 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.367 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:24.367 10:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.625 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.191 00:10:25.191 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.191 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.191 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.449 { 
00:10:25.449 "cntlid": 37, 00:10:25.449 "qid": 0, 00:10:25.449 "state": "enabled", 00:10:25.449 "thread": "nvmf_tgt_poll_group_000", 00:10:25.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:25.449 "listen_address": { 00:10:25.449 "trtype": "TCP", 00:10:25.449 "adrfam": "IPv4", 00:10:25.449 "traddr": "10.0.0.3", 00:10:25.449 "trsvcid": "4420" 00:10:25.449 }, 00:10:25.449 "peer_address": { 00:10:25.449 "trtype": "TCP", 00:10:25.449 "adrfam": "IPv4", 00:10:25.449 "traddr": "10.0.0.1", 00:10:25.449 "trsvcid": "49220" 00:10:25.449 }, 00:10:25.449 "auth": { 00:10:25.449 "state": "completed", 00:10:25.449 "digest": "sha256", 00:10:25.449 "dhgroup": "ffdhe6144" 00:10:25.449 } 00:10:25.449 } 00:10:25.449 ]' 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.449 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.015 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:26.015 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.581 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:26.839 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.406 00:10:27.406 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.406 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.406 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:27.665 { 00:10:27.665 "cntlid": 39, 00:10:27.665 "qid": 0, 00:10:27.665 "state": "enabled", 00:10:27.665 "thread": "nvmf_tgt_poll_group_000", 00:10:27.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:27.665 "listen_address": { 00:10:27.665 "trtype": "TCP", 00:10:27.665 "adrfam": "IPv4", 00:10:27.665 "traddr": "10.0.0.3", 00:10:27.665 "trsvcid": "4420" 00:10:27.665 }, 00:10:27.665 "peer_address": { 00:10:27.665 "trtype": "TCP", 00:10:27.665 "adrfam": "IPv4", 00:10:27.665 "traddr": "10.0.0.1", 00:10:27.665 "trsvcid": "49236" 00:10:27.665 }, 00:10:27.665 "auth": { 00:10:27.665 "state": "completed", 00:10:27.665 "digest": "sha256", 00:10:27.665 "dhgroup": "ffdhe6144" 00:10:27.665 } 00:10:27.665 } 00:10:27.665 ]' 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.665 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.924 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:27.924 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.924 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.924 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.924 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.183 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:28.183 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:28.752 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.011 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.579 00:10:29.579 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.579 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.579 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.839 { 00:10:29.839 "cntlid": 41, 00:10:29.839 "qid": 0, 00:10:29.839 "state": "enabled", 00:10:29.839 "thread": "nvmf_tgt_poll_group_000", 00:10:29.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:29.839 "listen_address": { 00:10:29.839 "trtype": "TCP", 00:10:29.839 "adrfam": "IPv4", 00:10:29.839 "traddr": "10.0.0.3", 00:10:29.839 "trsvcid": "4420" 00:10:29.839 }, 00:10:29.839 "peer_address": { 00:10:29.839 "trtype": "TCP", 00:10:29.839 "adrfam": "IPv4", 00:10:29.839 "traddr": "10.0.0.1", 00:10:29.839 "trsvcid": "49266" 00:10:29.839 }, 00:10:29.839 "auth": { 00:10:29.839 "state": "completed", 00:10:29.839 "digest": "sha256", 00:10:29.839 "dhgroup": "ffdhe8192" 00:10:29.839 } 00:10:29.839 } 00:10:29.839 ]' 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.839 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.098 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:30.098 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.098 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.098 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.098 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.357 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:30.357 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
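  For reference, one pass of the connect_authenticate helper traced above reduces to roughly the host/target RPC sequence below. This is a minimal sketch assembled from the trace, not the script itself: it assumes the target and its subsystem nqn.2024-03.io.spdk:cnode0 (listener 10.0.0.3:4420) are already up, that the key names key1/ckey1 were registered with the keyring earlier in the run, and that rpc_cmd in the trace resolves to rpc.py against the target's default RPC socket.

    # Host side: restrict negotiation to the digest/dhgroup under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host NQN and bind it to the key pair under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attaching a controller triggers the DH-HMAC-CHAP handshake
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: confirm the qpair completed authentication with the expected parameters
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'

    # Host side: tear the controller down again before the next iteration
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0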
00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:30.924 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.182 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.115 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.115 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.115 10:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.373 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.374 { 00:10:32.374 "cntlid": 43, 00:10:32.374 "qid": 0, 00:10:32.374 "state": "enabled", 00:10:32.374 "thread": "nvmf_tgt_poll_group_000", 00:10:32.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:32.374 "listen_address": { 00:10:32.374 "trtype": "TCP", 00:10:32.374 "adrfam": "IPv4", 00:10:32.374 "traddr": "10.0.0.3", 00:10:32.374 "trsvcid": "4420" 00:10:32.374 }, 00:10:32.374 "peer_address": { 00:10:32.374 "trtype": "TCP", 00:10:32.374 "adrfam": "IPv4", 00:10:32.374 "traddr": "10.0.0.1", 00:10:32.374 "trsvcid": "49296" 00:10:32.374 }, 00:10:32.374 "auth": { 00:10:32.374 "state": "completed", 00:10:32.374 "digest": "sha256", 00:10:32.374 "dhgroup": "ffdhe8192" 00:10:32.374 } 00:10:32.374 } 00:10:32.374 ]' 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:32.374 10:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.374 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.374 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.374 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.632 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:32.632 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
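  The nvme_connect calls interleaved with the RPC flow exercise the same handshake through the kernel initiator. A sketch of that path, with the DHHC-1 secrets from the trace replaced by <host-secret>/<ctrl-secret> placeholders:

    # Kernel NVMe/TCP initiator using in-band DH-HMAC-CHAP; substitute the
    # DHHC-1:... strings generated by the test for <host-secret>/<ctrl-secret>
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
        --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

    # ...verify the connection, then drop it again
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0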
00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.567 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.567 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.501 00:10:34.501 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.501 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.501 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.759 10:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.759 { 00:10:34.759 "cntlid": 45, 00:10:34.759 "qid": 0, 00:10:34.759 "state": "enabled", 00:10:34.759 "thread": "nvmf_tgt_poll_group_000", 00:10:34.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:34.759 "listen_address": { 00:10:34.759 "trtype": "TCP", 00:10:34.759 "adrfam": "IPv4", 00:10:34.759 "traddr": "10.0.0.3", 00:10:34.759 "trsvcid": "4420" 00:10:34.759 }, 00:10:34.759 "peer_address": { 00:10:34.759 "trtype": "TCP", 00:10:34.759 "adrfam": "IPv4", 00:10:34.759 "traddr": "10.0.0.1", 00:10:34.759 "trsvcid": "41970" 00:10:34.759 }, 00:10:34.759 "auth": { 00:10:34.759 "state": "completed", 00:10:34.759 "digest": "sha256", 00:10:34.759 "dhgroup": "ffdhe8192" 00:10:34.759 } 00:10:34.759 } 00:10:34.759 ]' 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.759 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.017 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:35.017 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:35.951 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.209 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.791 00:10:36.791 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.791 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.791 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.091 
10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.091 { 00:10:37.091 "cntlid": 47, 00:10:37.091 "qid": 0, 00:10:37.091 "state": "enabled", 00:10:37.091 "thread": "nvmf_tgt_poll_group_000", 00:10:37.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:37.091 "listen_address": { 00:10:37.091 "trtype": "TCP", 00:10:37.091 "adrfam": "IPv4", 00:10:37.091 "traddr": "10.0.0.3", 00:10:37.091 "trsvcid": "4420" 00:10:37.091 }, 00:10:37.091 "peer_address": { 00:10:37.091 "trtype": "TCP", 00:10:37.091 "adrfam": "IPv4", 00:10:37.091 "traddr": "10.0.0.1", 00:10:37.091 "trsvcid": "41990" 00:10:37.091 }, 00:10:37.091 "auth": { 00:10:37.091 "state": "completed", 00:10:37.091 "digest": "sha256", 00:10:37.091 "dhgroup": "ffdhe8192" 00:10:37.091 } 00:10:37.091 } 00:10:37.091 ]' 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.091 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.374 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:37.374 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.374 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.374 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.374 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.632 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:37.632 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
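  The structure driving all of these passes is the three nested loops visible in the trace (target/auth.sh @118-@123). A shape-only sketch; the digests, dhgroups and keys arrays, and the hostrpc/connect_authenticate helpers, are defined earlier in target/auth.sh and only partially visible here:

    # Sweep every digest / DH group / key combination (shape only; arrays and
    # helper functions come from earlier in target/auth.sh)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # limit the host to one digest+dhgroup, then authenticate with key $keyid
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done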
00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.198 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.765 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.766 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.766 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.024 00:10:39.025 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.025 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.025 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.283 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.283 { 00:10:39.283 "cntlid": 49, 00:10:39.283 "qid": 0, 00:10:39.283 "state": "enabled", 00:10:39.283 "thread": "nvmf_tgt_poll_group_000", 00:10:39.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:39.283 "listen_address": { 00:10:39.283 "trtype": "TCP", 00:10:39.283 "adrfam": "IPv4", 00:10:39.283 "traddr": "10.0.0.3", 00:10:39.283 "trsvcid": "4420" 00:10:39.283 }, 00:10:39.284 "peer_address": { 00:10:39.284 "trtype": "TCP", 00:10:39.284 "adrfam": "IPv4", 00:10:39.284 "traddr": "10.0.0.1", 00:10:39.284 "trsvcid": "42010" 00:10:39.284 }, 00:10:39.284 "auth": { 00:10:39.284 "state": "completed", 00:10:39.284 "digest": "sha384", 00:10:39.284 "dhgroup": "null" 00:10:39.284 } 00:10:39.284 } 00:10:39.284 ]' 00:10:39.284 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.284 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.284 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.284 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:39.284 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.284 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.284 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.284 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.851 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:39.851 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.419 10:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.419 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.678 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.938 00:10:40.938 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.938 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:10:40.938 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.197 { 00:10:41.197 "cntlid": 51, 00:10:41.197 "qid": 0, 00:10:41.197 "state": "enabled", 00:10:41.197 "thread": "nvmf_tgt_poll_group_000", 00:10:41.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:41.197 "listen_address": { 00:10:41.197 "trtype": "TCP", 00:10:41.197 "adrfam": "IPv4", 00:10:41.197 "traddr": "10.0.0.3", 00:10:41.197 "trsvcid": "4420" 00:10:41.197 }, 00:10:41.197 "peer_address": { 00:10:41.197 "trtype": "TCP", 00:10:41.197 "adrfam": "IPv4", 00:10:41.197 "traddr": "10.0.0.1", 00:10:41.197 "trsvcid": "42048" 00:10:41.197 }, 00:10:41.197 "auth": { 00:10:41.197 "state": "completed", 00:10:41.197 "digest": "sha384", 00:10:41.197 "dhgroup": "null" 00:10:41.197 } 00:10:41.197 } 00:10:41.197 ]' 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.197 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.456 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:41.456 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.456 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.456 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.456 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.715 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:41.715 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.285 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:42.285 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.545 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.804 00:10:42.804 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.804 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:10:42.804 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.063 { 00:10:43.063 "cntlid": 53, 00:10:43.063 "qid": 0, 00:10:43.063 "state": "enabled", 00:10:43.063 "thread": "nvmf_tgt_poll_group_000", 00:10:43.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:43.063 "listen_address": { 00:10:43.063 "trtype": "TCP", 00:10:43.063 "adrfam": "IPv4", 00:10:43.063 "traddr": "10.0.0.3", 00:10:43.063 "trsvcid": "4420" 00:10:43.063 }, 00:10:43.063 "peer_address": { 00:10:43.063 "trtype": "TCP", 00:10:43.063 "adrfam": "IPv4", 00:10:43.063 "traddr": "10.0.0.1", 00:10:43.063 "trsvcid": "42072" 00:10:43.063 }, 00:10:43.063 "auth": { 00:10:43.063 "state": "completed", 00:10:43.063 "digest": "sha384", 00:10:43.063 "dhgroup": "null" 00:10:43.063 } 00:10:43.063 } 00:10:43.063 ]' 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.063 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.064 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:43.064 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.323 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.324 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.324 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.583 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:43.583 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:44.153 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.412 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.413 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.981 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
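[Note] The qpair JSON dumps in this trace are how the test asserts that authentication actually completed: it asks the target for the subsystem's active queue pairs and checks the auth block with jq. Roughly equivalent shell, under the assumption that the script captures the RPC output in a variable as the qpairs= assignments above suggest (expected dhgroup changes as the loop moves from null to ffdhe2048 and ffdhe3072):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null"      ]]   # later: ffdhe2048, ffdhe3072
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

Once the state reads "completed", the SPDK-side controller is detached with bdev_nvme_detach_controller nvme0 and the same key is retried through the kernel initiator.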
00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.981 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.241 { 00:10:45.241 "cntlid": 55, 00:10:45.241 "qid": 0, 00:10:45.241 "state": "enabled", 00:10:45.241 "thread": "nvmf_tgt_poll_group_000", 00:10:45.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:45.241 "listen_address": { 00:10:45.241 "trtype": "TCP", 00:10:45.241 "adrfam": "IPv4", 00:10:45.241 "traddr": "10.0.0.3", 00:10:45.241 "trsvcid": "4420" 00:10:45.241 }, 00:10:45.241 "peer_address": { 00:10:45.241 "trtype": "TCP", 00:10:45.241 "adrfam": "IPv4", 00:10:45.241 "traddr": "10.0.0.1", 00:10:45.241 "trsvcid": "40466" 00:10:45.241 }, 00:10:45.241 "auth": { 00:10:45.241 "state": "completed", 00:10:45.241 "digest": "sha384", 00:10:45.241 "dhgroup": "null" 00:10:45.241 } 00:10:45.241 } 00:10:45.241 ]' 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.241 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.502 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:45.502 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
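[Note] Besides the SPDK bdev initiator, every pass also authenticates through the kernel initiator with nvme-cli, passing the DHHC-1 secrets on the command line, then tears the association down and removes the host entry before the next key or dhgroup is tried. Sketch of that leg with the same flags used in the trace (secrets abbreviated here; the full DHHC-1 strings appear in the log above):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
      --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 \
      --dhchap-secret "DHHC-1:03:..." --dhchap-ctrl-secret "DHHC-1:xx:..."   # ctrl secret only when a ckey exists
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096

From this point the outer loop advances to the ffdhe2048 dhgroup and repeats the whole sequence for keys 0 through 3, followed by ffdhe3072 further down.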
00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:46.072 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.642 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.901 00:10:46.901 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.901 10:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.901 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.161 { 00:10:47.161 "cntlid": 57, 00:10:47.161 "qid": 0, 00:10:47.161 "state": "enabled", 00:10:47.161 "thread": "nvmf_tgt_poll_group_000", 00:10:47.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:47.161 "listen_address": { 00:10:47.161 "trtype": "TCP", 00:10:47.161 "adrfam": "IPv4", 00:10:47.161 "traddr": "10.0.0.3", 00:10:47.161 "trsvcid": "4420" 00:10:47.161 }, 00:10:47.161 "peer_address": { 00:10:47.161 "trtype": "TCP", 00:10:47.161 "adrfam": "IPv4", 00:10:47.161 "traddr": "10.0.0.1", 00:10:47.161 "trsvcid": "40480" 00:10:47.161 }, 00:10:47.161 "auth": { 00:10:47.161 "state": "completed", 00:10:47.161 "digest": "sha384", 00:10:47.161 "dhgroup": "ffdhe2048" 00:10:47.161 } 00:10:47.161 } 00:10:47.161 ]' 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.161 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.162 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.731 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:47.731 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: 
--dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.302 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.561 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.820 00:10:48.820 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.820 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.820 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.080 { 00:10:49.080 "cntlid": 59, 00:10:49.080 "qid": 0, 00:10:49.080 "state": "enabled", 00:10:49.080 "thread": "nvmf_tgt_poll_group_000", 00:10:49.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:49.080 "listen_address": { 00:10:49.080 "trtype": "TCP", 00:10:49.080 "adrfam": "IPv4", 00:10:49.080 "traddr": "10.0.0.3", 00:10:49.080 "trsvcid": "4420" 00:10:49.080 }, 00:10:49.080 "peer_address": { 00:10:49.080 "trtype": "TCP", 00:10:49.080 "adrfam": "IPv4", 00:10:49.080 "traddr": "10.0.0.1", 00:10:49.080 "trsvcid": "40514" 00:10:49.080 }, 00:10:49.080 "auth": { 00:10:49.080 "state": "completed", 00:10:49.080 "digest": "sha384", 00:10:49.080 "dhgroup": "ffdhe2048" 00:10:49.080 } 00:10:49.080 } 00:10:49.080 ]' 00:10:49.080 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.339 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.598 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:49.598 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:50.166 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.425 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.994 00:10:50.994 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.994 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.994 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.994 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.256 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.256 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.256 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.256 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.257 { 00:10:51.257 "cntlid": 61, 00:10:51.257 "qid": 0, 00:10:51.257 "state": "enabled", 00:10:51.257 "thread": "nvmf_tgt_poll_group_000", 00:10:51.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:51.257 "listen_address": { 00:10:51.257 "trtype": "TCP", 00:10:51.257 "adrfam": "IPv4", 00:10:51.257 "traddr": "10.0.0.3", 00:10:51.257 "trsvcid": "4420" 00:10:51.257 }, 00:10:51.257 "peer_address": { 00:10:51.257 "trtype": "TCP", 00:10:51.257 "adrfam": "IPv4", 00:10:51.257 "traddr": "10.0.0.1", 00:10:51.257 "trsvcid": "40538" 00:10:51.257 }, 00:10:51.257 "auth": { 00:10:51.257 "state": "completed", 00:10:51.257 "digest": "sha384", 00:10:51.257 "dhgroup": "ffdhe2048" 00:10:51.257 } 00:10:51.257 } 00:10:51.257 ]' 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.257 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.519 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:51.519 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:52.087 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.655 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.914 00:10:52.914 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.914 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.914 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.173 { 00:10:53.173 "cntlid": 63, 00:10:53.173 "qid": 0, 00:10:53.173 "state": "enabled", 00:10:53.173 "thread": "nvmf_tgt_poll_group_000", 00:10:53.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:53.173 "listen_address": { 00:10:53.173 "trtype": "TCP", 00:10:53.173 "adrfam": "IPv4", 00:10:53.173 "traddr": "10.0.0.3", 00:10:53.173 "trsvcid": "4420" 00:10:53.173 }, 00:10:53.173 "peer_address": { 00:10:53.173 "trtype": "TCP", 00:10:53.173 "adrfam": "IPv4", 00:10:53.173 "traddr": "10.0.0.1", 00:10:53.173 "trsvcid": "40578" 00:10:53.173 }, 00:10:53.173 "auth": { 00:10:53.173 "state": "completed", 00:10:53.173 "digest": "sha384", 00:10:53.173 "dhgroup": "ffdhe2048" 00:10:53.173 } 00:10:53.173 } 00:10:53.173 ]' 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.173 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.741 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:53.741 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.310 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:54.569 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.829 00:10:54.829 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.829 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.829 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.088 { 00:10:55.088 "cntlid": 65, 00:10:55.088 "qid": 0, 00:10:55.088 "state": "enabled", 00:10:55.088 "thread": "nvmf_tgt_poll_group_000", 00:10:55.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:55.088 "listen_address": { 00:10:55.088 "trtype": "TCP", 00:10:55.088 "adrfam": "IPv4", 00:10:55.088 "traddr": "10.0.0.3", 00:10:55.088 "trsvcid": "4420" 00:10:55.088 }, 00:10:55.088 "peer_address": { 00:10:55.088 "trtype": "TCP", 00:10:55.088 "adrfam": "IPv4", 00:10:55.088 "traddr": "10.0.0.1", 00:10:55.088 "trsvcid": "59010" 00:10:55.088 }, 00:10:55.088 "auth": { 00:10:55.088 "state": "completed", 00:10:55.088 "digest": "sha384", 00:10:55.088 "dhgroup": "ffdhe3072" 00:10:55.088 } 00:10:55.088 } 00:10:55.088 ]' 00:10:55.088 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.348 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.607 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:55.607 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.175 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.433 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.433 10:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.434 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.002 00:10:57.002 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.002 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.002 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.261 { 00:10:57.261 "cntlid": 67, 00:10:57.261 "qid": 0, 00:10:57.261 "state": "enabled", 00:10:57.261 "thread": "nvmf_tgt_poll_group_000", 00:10:57.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:57.261 "listen_address": { 00:10:57.261 "trtype": "TCP", 00:10:57.261 "adrfam": "IPv4", 00:10:57.261 "traddr": "10.0.0.3", 00:10:57.261 "trsvcid": "4420" 00:10:57.261 }, 00:10:57.261 "peer_address": { 00:10:57.261 "trtype": "TCP", 00:10:57.261 "adrfam": "IPv4", 00:10:57.261 "traddr": "10.0.0.1", 00:10:57.261 "trsvcid": "59040" 00:10:57.261 }, 00:10:57.261 "auth": { 00:10:57.261 "state": "completed", 00:10:57.261 "digest": "sha384", 00:10:57.261 "dhgroup": "ffdhe3072" 00:10:57.261 } 00:10:57.261 } 00:10:57.261 ]' 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.261 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.520 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:57.520 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:10:58.105 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.105 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:58.106 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.419 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.706 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.975 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.235 { 00:10:59.235 "cntlid": 69, 00:10:59.235 "qid": 0, 00:10:59.235 "state": "enabled", 00:10:59.235 "thread": "nvmf_tgt_poll_group_000", 00:10:59.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:10:59.235 "listen_address": { 00:10:59.235 "trtype": "TCP", 00:10:59.235 "adrfam": "IPv4", 00:10:59.235 "traddr": "10.0.0.3", 00:10:59.235 "trsvcid": "4420" 00:10:59.235 }, 00:10:59.235 "peer_address": { 00:10:59.235 "trtype": "TCP", 00:10:59.235 "adrfam": "IPv4", 00:10:59.235 "traddr": "10.0.0.1", 00:10:59.235 "trsvcid": "59062" 00:10:59.235 }, 00:10:59.235 "auth": { 00:10:59.235 "state": "completed", 00:10:59.235 "digest": "sha384", 00:10:59.235 "dhgroup": "ffdhe3072" 00:10:59.235 } 00:10:59.235 } 00:10:59.235 ]' 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:59.235 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.495 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:10:59.495 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.432 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.432 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.691 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.951 { 00:11:00.951 "cntlid": 71, 00:11:00.951 "qid": 0, 00:11:00.951 "state": "enabled", 00:11:00.951 "thread": "nvmf_tgt_poll_group_000", 00:11:00.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:00.951 "listen_address": { 00:11:00.951 "trtype": "TCP", 00:11:00.951 "adrfam": "IPv4", 00:11:00.951 "traddr": "10.0.0.3", 00:11:00.951 "trsvcid": "4420" 00:11:00.951 }, 00:11:00.951 "peer_address": { 00:11:00.951 "trtype": "TCP", 00:11:00.951 "adrfam": "IPv4", 00:11:00.951 "traddr": "10.0.0.1", 00:11:00.951 "trsvcid": "59078" 00:11:00.951 }, 00:11:00.951 "auth": { 00:11:00.951 "state": "completed", 00:11:00.951 "digest": "sha384", 00:11:00.951 "dhgroup": "ffdhe3072" 00:11:00.951 } 00:11:00.951 } 00:11:00.951 ]' 00:11:00.951 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.210 10:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.469 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:01.469 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:02.037 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.297 10:32:51 
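For orientation, the passage that follows (like the ones before it) repeats the same connect_authenticate loop from target/auth.sh, just with a different key/DH-group combination each time. A condensed sketch of one iteration, reconstructed only from the expanded commands visible in this log, is shown below; the target application's RPC socket is not printed in this excerpt, so $TARGET_RPC is an assumption, while the host-side socket (/var/tmp/host.sock) is taken verbatim from the log.

  # One connect_authenticate iteration (digest=sha384, dhgroup=ffdhe4096, key0),
  # reconstructed from the log. $TARGET_RPC stands for the target application's
  # rpc.py invocation; only the host socket is visible in this excerpt (assumption).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_RPC="$RPC -s /var/tmp/host.sock"        # host app: bdev_nvme_* calls
  TARGET_RPC="$RPC"                            # target app: nvmf_* calls (socket assumed)
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Restrict the host to a single digest/DH group, then allow this host on the
  # subsystem with the key pair under test.
  $HOST_RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  $TARGET_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach with bidirectional DH-HMAC-CHAP and verify the negotiated parameters.
  $HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'          # expect "nvme0"
  $TARGET_RPC nvmf_subsystem_get_qpairs "$SUBNQN" \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'             # sha384 / ffdhe4096 / completed

  # Tear down before the next key/DH-group combination.
  $HOST_RPC bdev_nvme_detach_controller nvme0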
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.297 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.865 00:11:02.865 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.865 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.865 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.124 { 00:11:03.124 "cntlid": 73, 00:11:03.124 "qid": 0, 00:11:03.124 "state": "enabled", 00:11:03.124 "thread": "nvmf_tgt_poll_group_000", 00:11:03.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:03.124 "listen_address": { 00:11:03.124 "trtype": "TCP", 00:11:03.124 "adrfam": "IPv4", 00:11:03.124 "traddr": "10.0.0.3", 00:11:03.124 "trsvcid": "4420" 00:11:03.124 }, 00:11:03.124 "peer_address": { 00:11:03.124 "trtype": "TCP", 00:11:03.124 "adrfam": "IPv4", 00:11:03.124 "traddr": "10.0.0.1", 00:11:03.124 "trsvcid": "59104" 00:11:03.124 }, 00:11:03.124 "auth": { 00:11:03.124 "state": "completed", 00:11:03.124 "digest": "sha384", 00:11:03.124 "dhgroup": "ffdhe4096" 00:11:03.124 } 00:11:03.124 } 00:11:03.124 ]' 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.124 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.383 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.383 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.383 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.641 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:03.641 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:04.210 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.469 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.469 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.037 00:11:05.037 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.037 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.037 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.297 { 00:11:05.297 "cntlid": 75, 00:11:05.297 "qid": 0, 00:11:05.297 "state": "enabled", 00:11:05.297 "thread": "nvmf_tgt_poll_group_000", 00:11:05.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:05.297 "listen_address": { 00:11:05.297 "trtype": "TCP", 00:11:05.297 "adrfam": "IPv4", 00:11:05.297 "traddr": "10.0.0.3", 00:11:05.297 "trsvcid": "4420" 00:11:05.297 }, 00:11:05.297 "peer_address": { 00:11:05.297 "trtype": "TCP", 00:11:05.297 "adrfam": "IPv4", 00:11:05.297 "traddr": "10.0.0.1", 00:11:05.297 "trsvcid": "34950" 00:11:05.297 }, 00:11:05.297 "auth": { 00:11:05.297 "state": "completed", 00:11:05.297 "digest": "sha384", 00:11:05.297 "dhgroup": "ffdhe4096" 00:11:05.297 } 00:11:05.297 } 00:11:05.297 ]' 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:05.297 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.297 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.297 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.297 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.557 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:05.557 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:06.125 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.693 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.952 00:11:06.952 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.952 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.952 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.211 { 00:11:07.211 "cntlid": 77, 00:11:07.211 "qid": 0, 00:11:07.211 "state": "enabled", 00:11:07.211 "thread": "nvmf_tgt_poll_group_000", 00:11:07.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:07.211 "listen_address": { 00:11:07.211 "trtype": "TCP", 00:11:07.211 "adrfam": "IPv4", 00:11:07.211 "traddr": "10.0.0.3", 00:11:07.211 "trsvcid": "4420" 00:11:07.211 }, 00:11:07.211 "peer_address": { 00:11:07.211 "trtype": "TCP", 00:11:07.211 "adrfam": "IPv4", 00:11:07.211 "traddr": "10.0.0.1", 00:11:07.211 "trsvcid": "34976" 00:11:07.211 }, 00:11:07.211 "auth": { 00:11:07.211 "state": "completed", 00:11:07.211 "digest": "sha384", 00:11:07.211 "dhgroup": "ffdhe4096" 00:11:07.211 } 00:11:07.211 } 00:11:07.211 ]' 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:07.211 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.471 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.471 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.471 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.730 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:07.730 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.299 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.558 10:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.558 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:09.126 00:11:09.126 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.126 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.126 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.385 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.385 { 00:11:09.385 "cntlid": 79, 00:11:09.385 "qid": 0, 00:11:09.385 "state": "enabled", 00:11:09.385 "thread": "nvmf_tgt_poll_group_000", 00:11:09.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:09.385 "listen_address": { 00:11:09.385 "trtype": "TCP", 00:11:09.385 "adrfam": "IPv4", 00:11:09.385 "traddr": "10.0.0.3", 00:11:09.385 "trsvcid": "4420" 00:11:09.385 }, 00:11:09.385 "peer_address": { 00:11:09.385 "trtype": "TCP", 00:11:09.385 "adrfam": "IPv4", 00:11:09.386 "traddr": "10.0.0.1", 00:11:09.386 "trsvcid": "35014" 00:11:09.386 }, 00:11:09.386 "auth": { 00:11:09.386 "state": "completed", 00:11:09.386 "digest": "sha384", 00:11:09.386 "dhgroup": "ffdhe4096" 00:11:09.386 } 00:11:09.386 } 00:11:09.386 ]' 00:11:09.386 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.386 10:32:58 
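The second half of each iteration, visible immediately before and after this point, drives the same key through the kernel initiator and then removes the host entry so the next key can be configured. A minimal sketch, again using only commands that appear expanded in the log: the secrets are replaced by placeholders here (the full DHHC-1 strings are printed in the surrounding lines), and the controller secret is only passed for keys that have a ckey, which is why the key3 iterations omit --dhchap-ctrl-secret.

  # Kernel-initiator leg of an iteration: authenticate with nvme-cli, disconnect,
  # then drop the host from the subsystem. $DHCHAP_SECRET / $DHCHAP_CTRL_SECRET are
  # placeholders for the DHHC-1:... values shown in the log.
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096
  nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n "$SUBNQN"        # log reports "disconnected 1 controller(s)"
  # Target-side RPC; its socket path is not shown in this excerpt.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"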
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.386 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.645 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:09.645 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.213 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.783 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.041 00:11:11.041 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.041 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.041 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.299 { 00:11:11.299 "cntlid": 81, 00:11:11.299 "qid": 0, 00:11:11.299 "state": "enabled", 00:11:11.299 "thread": "nvmf_tgt_poll_group_000", 00:11:11.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:11.299 "listen_address": { 00:11:11.299 "trtype": "TCP", 00:11:11.299 "adrfam": "IPv4", 00:11:11.299 "traddr": "10.0.0.3", 00:11:11.299 "trsvcid": "4420" 00:11:11.299 }, 00:11:11.299 "peer_address": { 00:11:11.299 "trtype": "TCP", 00:11:11.299 "adrfam": "IPv4", 00:11:11.299 "traddr": "10.0.0.1", 00:11:11.299 "trsvcid": "35052" 00:11:11.299 }, 00:11:11.299 "auth": { 00:11:11.299 "state": "completed", 00:11:11.299 "digest": "sha384", 00:11:11.299 "dhgroup": "ffdhe6144" 00:11:11.299 } 00:11:11.299 } 00:11:11.299 ]' 00:11:11.299 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:11.300 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.300 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.300 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:11.300 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.558 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.558 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.559 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.818 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:11.818 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:12.386 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:12.387 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.646 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.906 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.165 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.425 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.425 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.425 { 00:11:13.425 "cntlid": 83, 00:11:13.425 "qid": 0, 00:11:13.425 "state": "enabled", 00:11:13.425 "thread": "nvmf_tgt_poll_group_000", 00:11:13.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:13.425 "listen_address": { 00:11:13.425 "trtype": "TCP", 00:11:13.425 "adrfam": "IPv4", 00:11:13.425 "traddr": "10.0.0.3", 00:11:13.425 "trsvcid": "4420" 00:11:13.425 }, 00:11:13.425 "peer_address": { 00:11:13.425 "trtype": "TCP", 00:11:13.425 "adrfam": "IPv4", 00:11:13.425 "traddr": "10.0.0.1", 00:11:13.425 "trsvcid": "35084" 00:11:13.425 }, 00:11:13.425 "auth": { 00:11:13.425 "state": "completed", 00:11:13.425 "digest": "sha384", 
00:11:13.425 "dhgroup": "ffdhe6144" 00:11:13.425 } 00:11:13.425 } 00:11:13.425 ]' 00:11:13.425 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.425 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.425 10:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.425 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:13.425 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.425 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.425 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.425 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.684 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:13.684 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:14.253 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.519 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.520 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.088 00:11:15.088 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.088 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.088 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.347 { 00:11:15.347 "cntlid": 85, 00:11:15.347 "qid": 0, 00:11:15.347 "state": "enabled", 00:11:15.347 "thread": "nvmf_tgt_poll_group_000", 00:11:15.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:15.347 "listen_address": { 00:11:15.347 "trtype": "TCP", 00:11:15.347 "adrfam": "IPv4", 00:11:15.347 "traddr": "10.0.0.3", 00:11:15.347 "trsvcid": "4420" 00:11:15.347 }, 00:11:15.347 "peer_address": { 00:11:15.347 "trtype": "TCP", 00:11:15.347 "adrfam": "IPv4", 00:11:15.347 "traddr": "10.0.0.1", 00:11:15.347 "trsvcid": "58752" 
00:11:15.347 }, 00:11:15.347 "auth": { 00:11:15.347 "state": "completed", 00:11:15.347 "digest": "sha384", 00:11:15.347 "dhgroup": "ffdhe6144" 00:11:15.347 } 00:11:15.347 } 00:11:15.347 ]' 00:11:15.347 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.347 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.347 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.347 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:15.347 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.606 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.606 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.606 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.865 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:15.865 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:16.433 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.433 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:16.433 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.433 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.433 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.434 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.434 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.434 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.693 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.006 00:11:17.006 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.006 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.006 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.280 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.280 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.280 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.280 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.539 { 00:11:17.539 "cntlid": 87, 00:11:17.539 "qid": 0, 00:11:17.539 "state": "enabled", 00:11:17.539 "thread": "nvmf_tgt_poll_group_000", 00:11:17.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:17.539 "listen_address": { 00:11:17.539 "trtype": "TCP", 00:11:17.539 "adrfam": "IPv4", 00:11:17.539 "traddr": "10.0.0.3", 00:11:17.539 "trsvcid": "4420" 00:11:17.539 }, 00:11:17.539 "peer_address": { 00:11:17.539 "trtype": "TCP", 00:11:17.539 "adrfam": "IPv4", 00:11:17.539 "traddr": "10.0.0.1", 00:11:17.539 "trsvcid": 
"58790" 00:11:17.539 }, 00:11:17.539 "auth": { 00:11:17.539 "state": "completed", 00:11:17.539 "digest": "sha384", 00:11:17.539 "dhgroup": "ffdhe6144" 00:11:17.539 } 00:11:17.539 } 00:11:17.539 ]' 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.539 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.798 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:17.798 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:18.367 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.626 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.195 00:11:19.195 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.195 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.195 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.455 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.455 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.455 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.455 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.714 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.714 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.714 { 00:11:19.714 "cntlid": 89, 00:11:19.714 "qid": 0, 00:11:19.714 "state": "enabled", 00:11:19.714 "thread": "nvmf_tgt_poll_group_000", 00:11:19.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:19.714 "listen_address": { 00:11:19.714 "trtype": "TCP", 00:11:19.714 "adrfam": "IPv4", 00:11:19.714 "traddr": "10.0.0.3", 00:11:19.714 "trsvcid": "4420" 00:11:19.714 }, 00:11:19.714 "peer_address": { 00:11:19.714 
"trtype": "TCP", 00:11:19.714 "adrfam": "IPv4", 00:11:19.714 "traddr": "10.0.0.1", 00:11:19.714 "trsvcid": "58820" 00:11:19.714 }, 00:11:19.714 "auth": { 00:11:19.714 "state": "completed", 00:11:19.714 "digest": "sha384", 00:11:19.714 "dhgroup": "ffdhe8192" 00:11:19.715 } 00:11:19.715 } 00:11:19.715 ]' 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.715 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.973 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:19.973 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.541 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:20.799 10:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:20.799 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.799 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.799 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:20.799 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.799 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.800 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.372 00:11:21.372 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.372 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.372 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.630 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.630 { 00:11:21.630 "cntlid": 91, 00:11:21.630 "qid": 0, 00:11:21.630 "state": "enabled", 00:11:21.630 "thread": "nvmf_tgt_poll_group_000", 00:11:21.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 
00:11:21.630 "listen_address": { 00:11:21.630 "trtype": "TCP", 00:11:21.630 "adrfam": "IPv4", 00:11:21.630 "traddr": "10.0.0.3", 00:11:21.630 "trsvcid": "4420" 00:11:21.630 }, 00:11:21.630 "peer_address": { 00:11:21.630 "trtype": "TCP", 00:11:21.630 "adrfam": "IPv4", 00:11:21.630 "traddr": "10.0.0.1", 00:11:21.630 "trsvcid": "58834" 00:11:21.630 }, 00:11:21.630 "auth": { 00:11:21.630 "state": "completed", 00:11:21.630 "digest": "sha384", 00:11:21.630 "dhgroup": "ffdhe8192" 00:11:21.630 } 00:11:21.631 } 00:11:21.631 ]' 00:11:21.631 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.631 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.631 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.889 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.889 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.889 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.889 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.889 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.148 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:22.148 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.716 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.975 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.543 00:11:23.543 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.543 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.543 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.802 { 00:11:23.802 "cntlid": 93, 00:11:23.802 "qid": 0, 00:11:23.802 "state": "enabled", 00:11:23.802 "thread": 
"nvmf_tgt_poll_group_000", 00:11:23.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:23.802 "listen_address": { 00:11:23.802 "trtype": "TCP", 00:11:23.802 "adrfam": "IPv4", 00:11:23.802 "traddr": "10.0.0.3", 00:11:23.802 "trsvcid": "4420" 00:11:23.802 }, 00:11:23.802 "peer_address": { 00:11:23.802 "trtype": "TCP", 00:11:23.802 "adrfam": "IPv4", 00:11:23.802 "traddr": "10.0.0.1", 00:11:23.802 "trsvcid": "53246" 00:11:23.802 }, 00:11:23.802 "auth": { 00:11:23.802 "state": "completed", 00:11:23.802 "digest": "sha384", 00:11:23.802 "dhgroup": "ffdhe8192" 00:11:23.802 } 00:11:23.802 } 00:11:23.802 ]' 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.802 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.061 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:24.061 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.061 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.061 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.061 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.319 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:24.319 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.886 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:24.886 10:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.145 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.713 00:11:25.713 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.713 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.713 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.971 { 00:11:25.971 "cntlid": 95, 00:11:25.971 "qid": 0, 00:11:25.971 "state": "enabled", 00:11:25.971 
"thread": "nvmf_tgt_poll_group_000", 00:11:25.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:25.971 "listen_address": { 00:11:25.971 "trtype": "TCP", 00:11:25.971 "adrfam": "IPv4", 00:11:25.971 "traddr": "10.0.0.3", 00:11:25.971 "trsvcid": "4420" 00:11:25.971 }, 00:11:25.971 "peer_address": { 00:11:25.971 "trtype": "TCP", 00:11:25.971 "adrfam": "IPv4", 00:11:25.971 "traddr": "10.0.0.1", 00:11:25.971 "trsvcid": "53268" 00:11:25.971 }, 00:11:25.971 "auth": { 00:11:25.971 "state": "completed", 00:11:25.971 "digest": "sha384", 00:11:25.971 "dhgroup": "ffdhe8192" 00:11:25.971 } 00:11:25.971 } 00:11:25.971 ]' 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.971 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.231 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.231 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.231 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.231 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.231 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.489 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:26.489 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.056 10:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.056 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.315 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.574 00:11:27.574 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.574 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.574 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.833 { 00:11:27.833 "cntlid": 97, 00:11:27.833 "qid": 0, 00:11:27.833 "state": "enabled", 00:11:27.833 "thread": "nvmf_tgt_poll_group_000", 00:11:27.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:27.833 "listen_address": { 00:11:27.833 "trtype": "TCP", 00:11:27.833 "adrfam": "IPv4", 00:11:27.833 "traddr": "10.0.0.3", 00:11:27.833 "trsvcid": "4420" 00:11:27.833 }, 00:11:27.833 "peer_address": { 00:11:27.833 "trtype": "TCP", 00:11:27.833 "adrfam": "IPv4", 00:11:27.833 "traddr": "10.0.0.1", 00:11:27.833 "trsvcid": "53308" 00:11:27.833 }, 00:11:27.833 "auth": { 00:11:27.833 "state": "completed", 00:11:27.833 "digest": "sha512", 00:11:27.833 "dhgroup": "null" 00:11:27.833 } 00:11:27.833 } 00:11:27.833 ]' 00:11:27.833 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.091 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.350 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:28.350 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:28.918 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.176 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:29.176 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.176 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.177 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.435 00:11:29.435 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.435 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.435 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.694 10:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.694 { 00:11:29.694 "cntlid": 99, 00:11:29.694 "qid": 0, 00:11:29.694 "state": "enabled", 00:11:29.694 "thread": "nvmf_tgt_poll_group_000", 00:11:29.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:29.694 "listen_address": { 00:11:29.694 "trtype": "TCP", 00:11:29.694 "adrfam": "IPv4", 00:11:29.694 "traddr": "10.0.0.3", 00:11:29.694 "trsvcid": "4420" 00:11:29.694 }, 00:11:29.694 "peer_address": { 00:11:29.694 "trtype": "TCP", 00:11:29.694 "adrfam": "IPv4", 00:11:29.694 "traddr": "10.0.0.1", 00:11:29.694 "trsvcid": "53342" 00:11:29.694 }, 00:11:29.694 "auth": { 00:11:29.694 "state": "completed", 00:11:29.694 "digest": "sha512", 00:11:29.694 "dhgroup": "null" 00:11:29.694 } 00:11:29.694 } 00:11:29.694 ]' 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:29.694 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.953 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.953 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.953 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.211 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:30.211 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.778 10:33:19 
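Each key is also exercised through the kernel initiator: after the bdev controller is detached, nvme connect is issued with the host and controller DHHC-1 secrets for that key index, the connection is dropped again (the log prints "disconnected 1 controller(s)"), and the host entry is removed from the subsystem so the next digest/dhgroup/key combination starts clean. A sketch of that leg for a key that has both a host and a controller secret (KEY and CKEY stand in for the DHHC-1 blobs printed in the log; key3 is configured without a separate controller secret above, so its connect omits --dhchap-ctrl-secret):

    # kernel-initiator connection using the same key pair
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
        --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # drop the host entry before the next digest/dhgroup/key combination
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096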
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:30.778 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.035 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.292 00:11:31.292 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.292 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.292 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.551 { 00:11:31.551 "cntlid": 101, 00:11:31.551 "qid": 0, 00:11:31.551 "state": "enabled", 00:11:31.551 "thread": "nvmf_tgt_poll_group_000", 00:11:31.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:31.551 "listen_address": { 00:11:31.551 "trtype": "TCP", 00:11:31.551 "adrfam": "IPv4", 00:11:31.551 "traddr": "10.0.0.3", 00:11:31.551 "trsvcid": "4420" 00:11:31.551 }, 00:11:31.551 "peer_address": { 00:11:31.551 "trtype": "TCP", 00:11:31.551 "adrfam": "IPv4", 00:11:31.551 "traddr": "10.0.0.1", 00:11:31.551 "trsvcid": "53366" 00:11:31.551 }, 00:11:31.551 "auth": { 00:11:31.551 "state": "completed", 00:11:31.551 "digest": "sha512", 00:11:31.551 "dhgroup": "null" 00:11:31.551 } 00:11:31.551 } 00:11:31.551 ]' 00:11:31.551 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.808 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.065 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:32.065 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:32.631 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.631 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:32.632 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.891 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.149 00:11:33.149 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.149 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.149 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.408 { 00:11:33.408 "cntlid": 103, 00:11:33.408 "qid": 0, 00:11:33.408 "state": "enabled", 00:11:33.408 "thread": "nvmf_tgt_poll_group_000", 00:11:33.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:33.408 "listen_address": { 00:11:33.408 "trtype": "TCP", 00:11:33.408 "adrfam": "IPv4", 00:11:33.408 "traddr": "10.0.0.3", 00:11:33.408 "trsvcid": "4420" 00:11:33.408 }, 00:11:33.408 "peer_address": { 00:11:33.408 "trtype": "TCP", 00:11:33.408 "adrfam": "IPv4", 00:11:33.408 "traddr": "10.0.0.1", 00:11:33.408 "trsvcid": "53400" 00:11:33.408 }, 00:11:33.408 "auth": { 00:11:33.408 "state": "completed", 00:11:33.408 "digest": "sha512", 00:11:33.408 "dhgroup": "null" 00:11:33.408 } 00:11:33.408 } 00:11:33.408 ]' 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.408 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.666 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:33.666 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.666 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.666 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.666 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.925 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:33.925 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.493 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.752 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.040 00:11:35.040 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.040 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.040 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.308 
10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.308 { 00:11:35.308 "cntlid": 105, 00:11:35.308 "qid": 0, 00:11:35.308 "state": "enabled", 00:11:35.308 "thread": "nvmf_tgt_poll_group_000", 00:11:35.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:35.308 "listen_address": { 00:11:35.308 "trtype": "TCP", 00:11:35.308 "adrfam": "IPv4", 00:11:35.308 "traddr": "10.0.0.3", 00:11:35.308 "trsvcid": "4420" 00:11:35.308 }, 00:11:35.308 "peer_address": { 00:11:35.308 "trtype": "TCP", 00:11:35.308 "adrfam": "IPv4", 00:11:35.308 "traddr": "10.0.0.1", 00:11:35.308 "trsvcid": "57014" 00:11:35.308 }, 00:11:35.308 "auth": { 00:11:35.308 "state": "completed", 00:11:35.308 "digest": "sha512", 00:11:35.308 "dhgroup": "ffdhe2048" 00:11:35.308 } 00:11:35.308 } 00:11:35.308 ]' 00:11:35.308 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.308 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.308 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.308 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.308 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.567 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.567 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.567 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.826 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:35.826 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:36.394 10:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:36.394 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.653 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.912 00:11:36.912 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.912 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.912 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.171 { 00:11:37.171 "cntlid": 107, 00:11:37.171 "qid": 0, 00:11:37.171 "state": "enabled", 00:11:37.171 "thread": "nvmf_tgt_poll_group_000", 00:11:37.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:37.171 "listen_address": { 00:11:37.171 "trtype": "TCP", 00:11:37.171 "adrfam": "IPv4", 00:11:37.171 "traddr": "10.0.0.3", 00:11:37.171 "trsvcid": "4420" 00:11:37.171 }, 00:11:37.171 "peer_address": { 00:11:37.171 "trtype": "TCP", 00:11:37.171 "adrfam": "IPv4", 00:11:37.171 "traddr": "10.0.0.1", 00:11:37.171 "trsvcid": "57042" 00:11:37.171 }, 00:11:37.171 "auth": { 00:11:37.171 "state": "completed", 00:11:37.171 "digest": "sha512", 00:11:37.171 "dhgroup": "ffdhe2048" 00:11:37.171 } 00:11:37.171 } 00:11:37.171 ]' 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.171 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.430 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.430 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.430 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.430 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.430 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.687 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:37.687 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:38.254 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.254 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:38.254 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.254 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.513 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.513 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.513 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:38.513 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.772 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.773 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.031 00:11:39.031 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.031 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.031 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.291 { 00:11:39.291 "cntlid": 109, 00:11:39.291 "qid": 0, 00:11:39.291 "state": "enabled", 00:11:39.291 "thread": "nvmf_tgt_poll_group_000", 00:11:39.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:39.291 "listen_address": { 00:11:39.291 "trtype": "TCP", 00:11:39.291 "adrfam": "IPv4", 00:11:39.291 "traddr": "10.0.0.3", 00:11:39.291 "trsvcid": "4420" 00:11:39.291 }, 00:11:39.291 "peer_address": { 00:11:39.291 "trtype": "TCP", 00:11:39.291 "adrfam": "IPv4", 00:11:39.291 "traddr": "10.0.0.1", 00:11:39.291 "trsvcid": "57064" 00:11:39.291 }, 00:11:39.291 "auth": { 00:11:39.291 "state": "completed", 00:11:39.291 "digest": "sha512", 00:11:39.291 "dhgroup": "ffdhe2048" 00:11:39.291 } 00:11:39.291 } 00:11:39.291 ]' 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.291 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.291 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.291 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:39.291 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.549 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.549 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.549 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.808 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:39.808 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:40.375 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.375 10:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:40.375 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.375 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.375 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.376 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.376 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:40.376 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:40.634 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:40.634 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.635 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.893 00:11:40.893 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.893 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.893 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.150 { 00:11:41.150 "cntlid": 111, 00:11:41.150 "qid": 0, 00:11:41.150 "state": "enabled", 00:11:41.150 "thread": "nvmf_tgt_poll_group_000", 00:11:41.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:41.150 "listen_address": { 00:11:41.150 "trtype": "TCP", 00:11:41.150 "adrfam": "IPv4", 00:11:41.150 "traddr": "10.0.0.3", 00:11:41.150 "trsvcid": "4420" 00:11:41.150 }, 00:11:41.150 "peer_address": { 00:11:41.150 "trtype": "TCP", 00:11:41.150 "adrfam": "IPv4", 00:11:41.150 "traddr": "10.0.0.1", 00:11:41.150 "trsvcid": "57090" 00:11:41.150 }, 00:11:41.150 "auth": { 00:11:41.150 "state": "completed", 00:11:41.150 "digest": "sha512", 00:11:41.150 "dhgroup": "ffdhe2048" 00:11:41.150 } 00:11:41.150 } 00:11:41.150 ]' 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.150 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.408 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:41.408 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.408 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.408 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.408 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.666 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:41.666 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:42.233 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.233 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:42.233 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.233 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.493 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.493 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.493 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.493 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:42.493 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.493 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.060 00:11:43.060 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.060 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
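For reference, a single iteration of the loop running above (here sha512 with ffdhe3072 and key0) can be sketched from the commands visible in this log alone. This is a minimal reconstruction, assuming the target app answers on its default RPC socket, the host RPC server listens at /var/tmp/host.sock, and the DH-HMAC-CHAP keys named key0/ckey0 were registered earlier in the run (that setup is not part of this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096

# Host side: restrict the allowed digest and DH group for this iteration.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN and bind it to key0 (ckey0 enables bidirectional auth).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: connect and authenticate with the same key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: the controller exists and the target reports a completed DH-CHAP exchange.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'         # expect sha512
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'        # expect ffdhe3072
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'          # expect completed

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The nvme connect / nvme disconnect lines in the log exercise the same keys through the kernel initiator, passing the secrets inline as --dhchap-secret/--dhchap-ctrl-secret DHHC-1 strings instead of by key name.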
00:11:43.060 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.319 { 00:11:43.319 "cntlid": 113, 00:11:43.319 "qid": 0, 00:11:43.319 "state": "enabled", 00:11:43.319 "thread": "nvmf_tgt_poll_group_000", 00:11:43.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:43.319 "listen_address": { 00:11:43.319 "trtype": "TCP", 00:11:43.319 "adrfam": "IPv4", 00:11:43.319 "traddr": "10.0.0.3", 00:11:43.319 "trsvcid": "4420" 00:11:43.319 }, 00:11:43.319 "peer_address": { 00:11:43.319 "trtype": "TCP", 00:11:43.319 "adrfam": "IPv4", 00:11:43.319 "traddr": "10.0.0.1", 00:11:43.319 "trsvcid": "57126" 00:11:43.319 }, 00:11:43.319 "auth": { 00:11:43.319 "state": "completed", 00:11:43.319 "digest": "sha512", 00:11:43.319 "dhgroup": "ffdhe3072" 00:11:43.319 } 00:11:43.319 } 00:11:43.319 ]' 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.319 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.319 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:43.319 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.319 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.319 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.319 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.887 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:43.887 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret 
DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:44.146 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.404 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:44.405 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.663 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.664 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.664 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.664 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.922 00:11:44.922 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.923 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.923 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.181 { 00:11:45.181 "cntlid": 115, 00:11:45.181 "qid": 0, 00:11:45.181 "state": "enabled", 00:11:45.181 "thread": "nvmf_tgt_poll_group_000", 00:11:45.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:45.181 "listen_address": { 00:11:45.181 "trtype": "TCP", 00:11:45.181 "adrfam": "IPv4", 00:11:45.181 "traddr": "10.0.0.3", 00:11:45.181 "trsvcid": "4420" 00:11:45.181 }, 00:11:45.181 "peer_address": { 00:11:45.181 "trtype": "TCP", 00:11:45.181 "adrfam": "IPv4", 00:11:45.181 "traddr": "10.0.0.1", 00:11:45.181 "trsvcid": "55028" 00:11:45.181 }, 00:11:45.181 "auth": { 00:11:45.181 "state": "completed", 00:11:45.181 "digest": "sha512", 00:11:45.181 "dhgroup": "ffdhe3072" 00:11:45.181 } 00:11:45.181 } 00:11:45.181 ]' 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:45.181 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.439 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.439 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.439 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.698 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:45.698 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 
96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:46.266 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.524 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.525 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.525 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.525 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.783 00:11:46.783 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.783 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.783 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.042 { 00:11:47.042 "cntlid": 117, 00:11:47.042 "qid": 0, 00:11:47.042 "state": "enabled", 00:11:47.042 "thread": "nvmf_tgt_poll_group_000", 00:11:47.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:47.042 "listen_address": { 00:11:47.042 "trtype": "TCP", 00:11:47.042 "adrfam": "IPv4", 00:11:47.042 "traddr": "10.0.0.3", 00:11:47.042 "trsvcid": "4420" 00:11:47.042 }, 00:11:47.042 "peer_address": { 00:11:47.042 "trtype": "TCP", 00:11:47.042 "adrfam": "IPv4", 00:11:47.042 "traddr": "10.0.0.1", 00:11:47.042 "trsvcid": "55072" 00:11:47.042 }, 00:11:47.042 "auth": { 00:11:47.042 "state": "completed", 00:11:47.042 "digest": "sha512", 00:11:47.042 "dhgroup": "ffdhe3072" 00:11:47.042 } 00:11:47.042 } 00:11:47.042 ]' 00:11:47.042 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.301 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.559 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:47.559 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:48.496 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.496 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.064 00:11:49.064 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.064 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.064 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.323 { 00:11:49.323 "cntlid": 119, 00:11:49.323 "qid": 0, 00:11:49.323 "state": "enabled", 00:11:49.323 "thread": "nvmf_tgt_poll_group_000", 00:11:49.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:49.323 "listen_address": { 00:11:49.323 "trtype": "TCP", 00:11:49.323 "adrfam": "IPv4", 00:11:49.323 "traddr": "10.0.0.3", 00:11:49.323 "trsvcid": "4420" 00:11:49.323 }, 00:11:49.323 "peer_address": { 00:11:49.323 "trtype": "TCP", 00:11:49.323 "adrfam": "IPv4", 00:11:49.323 "traddr": "10.0.0.1", 00:11:49.323 "trsvcid": "55100" 00:11:49.323 }, 00:11:49.323 "auth": { 00:11:49.323 "state": "completed", 00:11:49.323 "digest": "sha512", 00:11:49.323 "dhgroup": "ffdhe3072" 00:11:49.323 } 00:11:49.323 } 00:11:49.323 ]' 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:49.323 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.323 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.323 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.323 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.581 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:49.581 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:50.517 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.776 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.034 00:11:51.034 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.034 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.034 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.292 { 00:11:51.292 "cntlid": 121, 00:11:51.292 "qid": 0, 00:11:51.292 "state": "enabled", 00:11:51.292 "thread": "nvmf_tgt_poll_group_000", 00:11:51.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:51.292 "listen_address": { 00:11:51.292 "trtype": "TCP", 00:11:51.292 "adrfam": "IPv4", 00:11:51.292 "traddr": "10.0.0.3", 00:11:51.292 "trsvcid": "4420" 00:11:51.292 }, 00:11:51.292 "peer_address": { 00:11:51.292 "trtype": "TCP", 00:11:51.292 "adrfam": "IPv4", 00:11:51.292 "traddr": "10.0.0.1", 00:11:51.292 "trsvcid": "55128" 00:11:51.292 }, 00:11:51.292 "auth": { 00:11:51.292 "state": "completed", 00:11:51.292 "digest": "sha512", 00:11:51.292 "dhgroup": "ffdhe4096" 00:11:51.292 } 00:11:51.292 } 00:11:51.292 ]' 00:11:51.292 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.550 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.551 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.808 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret 
DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:51.808 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:52.375 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.633 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.227 00:11:53.227 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.227 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.227 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.488 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.488 { 00:11:53.488 "cntlid": 123, 00:11:53.488 "qid": 0, 00:11:53.488 "state": "enabled", 00:11:53.489 "thread": "nvmf_tgt_poll_group_000", 00:11:53.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:53.489 "listen_address": { 00:11:53.489 "trtype": "TCP", 00:11:53.489 "adrfam": "IPv4", 00:11:53.489 "traddr": "10.0.0.3", 00:11:53.489 "trsvcid": "4420" 00:11:53.489 }, 00:11:53.489 "peer_address": { 00:11:53.489 "trtype": "TCP", 00:11:53.489 "adrfam": "IPv4", 00:11:53.489 "traddr": "10.0.0.1", 00:11:53.489 "trsvcid": "55156" 00:11:53.489 }, 00:11:53.489 "auth": { 00:11:53.489 "state": "completed", 00:11:53.489 "digest": "sha512", 00:11:53.489 "dhgroup": "ffdhe4096" 00:11:53.489 } 00:11:53.489 } 00:11:53.489 ]' 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.489 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.748 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:53.748 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:54.683 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.942 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.942 10:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.943 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.201 00:11:55.201 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.201 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.201 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.460 { 00:11:55.460 "cntlid": 125, 00:11:55.460 "qid": 0, 00:11:55.460 "state": "enabled", 00:11:55.460 "thread": "nvmf_tgt_poll_group_000", 00:11:55.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:55.460 "listen_address": { 00:11:55.460 "trtype": "TCP", 00:11:55.460 "adrfam": "IPv4", 00:11:55.460 "traddr": "10.0.0.3", 00:11:55.460 "trsvcid": "4420" 00:11:55.460 }, 00:11:55.460 "peer_address": { 00:11:55.460 "trtype": "TCP", 00:11:55.460 "adrfam": "IPv4", 00:11:55.460 "traddr": "10.0.0.1", 00:11:55.460 "trsvcid": "46196" 00:11:55.460 }, 00:11:55.460 "auth": { 00:11:55.460 "state": "completed", 00:11:55.460 "digest": "sha512", 00:11:55.460 "dhgroup": "ffdhe4096" 00:11:55.460 } 00:11:55.460 } 00:11:55.460 ]' 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:55.460 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.718 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.718 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.718 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.975 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:55.975 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:56.541 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.799 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.056 00:11:57.056 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.056 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.056 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.314 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.314 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.314 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.314 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.572 { 00:11:57.572 "cntlid": 127, 00:11:57.572 "qid": 0, 00:11:57.572 "state": "enabled", 00:11:57.572 "thread": "nvmf_tgt_poll_group_000", 00:11:57.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:57.572 "listen_address": { 00:11:57.572 "trtype": "TCP", 00:11:57.572 "adrfam": "IPv4", 00:11:57.572 "traddr": "10.0.0.3", 00:11:57.572 "trsvcid": "4420" 00:11:57.572 }, 00:11:57.572 "peer_address": { 00:11:57.572 "trtype": "TCP", 00:11:57.572 "adrfam": "IPv4", 00:11:57.572 "traddr": "10.0.0.1", 00:11:57.572 "trsvcid": "46228" 00:11:57.572 }, 00:11:57.572 "auth": { 00:11:57.572 "state": "completed", 00:11:57.572 "digest": "sha512", 00:11:57.572 "dhgroup": "ffdhe4096" 00:11:57.572 } 00:11:57.572 } 00:11:57.572 ]' 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.572 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.831 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:57.831 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:58.397 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.655 10:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.655 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.221 00:11:59.221 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.221 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.221 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.479 { 00:11:59.479 "cntlid": 129, 00:11:59.479 "qid": 0, 00:11:59.479 "state": "enabled", 00:11:59.479 "thread": "nvmf_tgt_poll_group_000", 00:11:59.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:11:59.479 "listen_address": { 00:11:59.479 "trtype": "TCP", 00:11:59.479 "adrfam": "IPv4", 00:11:59.479 "traddr": "10.0.0.3", 00:11:59.479 "trsvcid": "4420" 00:11:59.479 }, 00:11:59.479 "peer_address": { 00:11:59.479 "trtype": "TCP", 00:11:59.479 "adrfam": "IPv4", 00:11:59.479 "traddr": "10.0.0.1", 00:11:59.479 "trsvcid": "46252" 00:11:59.479 }, 00:11:59.479 "auth": { 00:11:59.479 "state": "completed", 00:11:59.479 "digest": "sha512", 00:11:59.479 "dhgroup": "ffdhe6144" 00:11:59.479 } 00:11:59.479 } 00:11:59.479 ]' 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:59.479 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.737 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.737 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.737 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.995 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:11:59.995 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.562 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.563 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:00.563 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.821 10:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.821 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.396 00:12:01.396 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.396 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.396 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.654 { 00:12:01.654 "cntlid": 131, 00:12:01.654 "qid": 0, 00:12:01.654 "state": "enabled", 00:12:01.654 "thread": "nvmf_tgt_poll_group_000", 00:12:01.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:01.654 "listen_address": { 00:12:01.654 "trtype": "TCP", 00:12:01.654 "adrfam": "IPv4", 00:12:01.654 "traddr": "10.0.0.3", 00:12:01.654 "trsvcid": "4420" 00:12:01.654 }, 00:12:01.654 "peer_address": { 00:12:01.654 "trtype": "TCP", 00:12:01.654 "adrfam": "IPv4", 00:12:01.654 "traddr": "10.0.0.1", 00:12:01.654 "trsvcid": "46292" 00:12:01.654 }, 00:12:01.654 "auth": { 00:12:01.654 "state": "completed", 00:12:01.654 "digest": "sha512", 00:12:01.654 "dhgroup": "ffdhe6144" 00:12:01.654 } 00:12:01.654 } 00:12:01.654 ]' 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:01.654 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:01.914 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.914 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.914 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.173 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:12:02.173 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:02.740 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.999 10:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.999 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.566 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.566 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.567 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.567 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.567 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.567 { 00:12:03.567 "cntlid": 133, 00:12:03.567 "qid": 0, 00:12:03.567 "state": "enabled", 00:12:03.567 "thread": "nvmf_tgt_poll_group_000", 00:12:03.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:03.567 "listen_address": { 00:12:03.567 "trtype": "TCP", 00:12:03.567 "adrfam": "IPv4", 00:12:03.567 "traddr": "10.0.0.3", 00:12:03.567 "trsvcid": "4420" 00:12:03.567 }, 00:12:03.567 "peer_address": { 00:12:03.567 "trtype": "TCP", 00:12:03.567 "adrfam": "IPv4", 00:12:03.567 "traddr": "10.0.0.1", 00:12:03.567 "trsvcid": "49146" 00:12:03.567 }, 00:12:03.567 "auth": { 00:12:03.567 "state": "completed", 00:12:03.567 "digest": "sha512", 00:12:03.567 "dhgroup": "ffdhe6144" 00:12:03.567 } 00:12:03.567 } 00:12:03.567 ]' 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.825 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.084 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:12:04.084 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:12:04.651 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.651 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:04.651 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.651 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.910 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.478 00:12:05.478 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.478 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.478 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.737 { 00:12:05.737 "cntlid": 135, 00:12:05.737 "qid": 0, 00:12:05.737 "state": "enabled", 00:12:05.737 "thread": "nvmf_tgt_poll_group_000", 00:12:05.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:05.737 "listen_address": { 00:12:05.737 "trtype": "TCP", 00:12:05.737 "adrfam": "IPv4", 00:12:05.737 "traddr": "10.0.0.3", 00:12:05.737 "trsvcid": "4420" 00:12:05.737 }, 00:12:05.737 "peer_address": { 00:12:05.737 "trtype": "TCP", 00:12:05.737 "adrfam": "IPv4", 00:12:05.737 "traddr": "10.0.0.1", 00:12:05.737 "trsvcid": "49164" 00:12:05.737 }, 00:12:05.737 "auth": { 00:12:05.737 "state": "completed", 00:12:05.737 "digest": "sha512", 00:12:05.737 "dhgroup": "ffdhe6144" 00:12:05.737 } 00:12:05.737 } 00:12:05.737 ]' 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.737 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.996 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.254 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:06.254 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.190 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.128 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.128 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.387 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.387 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.387 { 00:12:08.387 "cntlid": 137, 00:12:08.387 "qid": 0, 00:12:08.387 "state": "enabled", 00:12:08.387 "thread": "nvmf_tgt_poll_group_000", 00:12:08.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:08.387 "listen_address": { 00:12:08.387 "trtype": "TCP", 00:12:08.387 "adrfam": "IPv4", 00:12:08.387 "traddr": "10.0.0.3", 00:12:08.387 "trsvcid": "4420" 00:12:08.387 }, 00:12:08.387 "peer_address": { 00:12:08.387 "trtype": "TCP", 00:12:08.387 "adrfam": "IPv4", 00:12:08.387 "traddr": "10.0.0.1", 00:12:08.387 "trsvcid": "49198" 00:12:08.387 }, 00:12:08.387 "auth": { 00:12:08.387 "state": "completed", 00:12:08.387 "digest": "sha512", 00:12:08.387 "dhgroup": "ffdhe8192" 00:12:08.387 } 00:12:08.387 } 00:12:08.387 ]' 00:12:08.387 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.387 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.387 10:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.387 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:08.387 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.387 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.387 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.387 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.646 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:12:08.646 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:09.582 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:09.843 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.843 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.411 00:12:10.411 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.411 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.411 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.670 { 00:12:10.670 "cntlid": 139, 00:12:10.670 "qid": 0, 00:12:10.670 "state": "enabled", 00:12:10.670 "thread": "nvmf_tgt_poll_group_000", 00:12:10.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:10.670 "listen_address": { 00:12:10.670 "trtype": "TCP", 00:12:10.670 "adrfam": "IPv4", 00:12:10.670 "traddr": "10.0.0.3", 00:12:10.670 "trsvcid": "4420" 00:12:10.670 }, 00:12:10.670 "peer_address": { 00:12:10.670 "trtype": "TCP", 00:12:10.670 "adrfam": "IPv4", 00:12:10.670 "traddr": "10.0.0.1", 00:12:10.670 "trsvcid": "49214" 00:12:10.670 }, 00:12:10.670 "auth": { 00:12:10.670 "state": "completed", 00:12:10.670 "digest": "sha512", 00:12:10.670 "dhgroup": "ffdhe8192" 00:12:10.670 } 00:12:10.670 } 00:12:10.670 ]' 00:12:10.670 10:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.670 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.929 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:12:10.929 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: --dhchap-ctrl-secret DHHC-1:02:YmY5OGYxZGViZDU2NzA4MGExZDg4ODc4NDNjMjY0ZTg2ZGEyNDBiMTEzMzFhOWU5nQ03ow==: 00:12:11.495 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:11.496 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.754 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.013 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.013 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.013 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.013 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.581 00:12:12.581 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.581 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.581 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.840 { 00:12:12.840 "cntlid": 141, 00:12:12.840 "qid": 0, 00:12:12.840 "state": "enabled", 00:12:12.840 "thread": "nvmf_tgt_poll_group_000", 00:12:12.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:12.840 "listen_address": { 00:12:12.840 "trtype": "TCP", 00:12:12.840 "adrfam": "IPv4", 00:12:12.840 "traddr": "10.0.0.3", 00:12:12.840 "trsvcid": "4420" 00:12:12.840 }, 00:12:12.840 "peer_address": { 00:12:12.840 "trtype": "TCP", 00:12:12.840 "adrfam": "IPv4", 00:12:12.840 "traddr": "10.0.0.1", 00:12:12.840 "trsvcid": "49222" 00:12:12.840 }, 00:12:12.840 "auth": { 00:12:12.840 "state": "completed", 00:12:12.840 "digest": 
"sha512", 00:12:12.840 "dhgroup": "ffdhe8192" 00:12:12.840 } 00:12:12.840 } 00:12:12.840 ]' 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.840 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.099 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:12:13.099 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:01:ZWFlYTk4OTUzYmJiOGEwM2U3MjhmNWQ2OTM2YWY4MWE2zdO4: 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.064 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.330 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:14.330 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.331 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.897 00:12:14.897 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.897 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.897 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.156 { 00:12:15.156 "cntlid": 143, 00:12:15.156 "qid": 0, 00:12:15.156 "state": "enabled", 00:12:15.156 "thread": "nvmf_tgt_poll_group_000", 00:12:15.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:15.156 "listen_address": { 00:12:15.156 "trtype": "TCP", 00:12:15.156 "adrfam": "IPv4", 00:12:15.156 "traddr": "10.0.0.3", 00:12:15.156 "trsvcid": "4420" 00:12:15.156 }, 00:12:15.156 "peer_address": { 00:12:15.156 "trtype": "TCP", 00:12:15.156 "adrfam": "IPv4", 00:12:15.156 "traddr": "10.0.0.1", 00:12:15.156 "trsvcid": "38316" 00:12:15.156 }, 00:12:15.156 "auth": { 00:12:15.156 "state": "completed", 00:12:15.156 
"digest": "sha512", 00:12:15.156 "dhgroup": "ffdhe8192" 00:12:15.156 } 00:12:15.156 } 00:12:15.156 ]' 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.156 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.415 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:15.415 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.415 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.415 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.415 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.674 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:15.674 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:16.240 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.498 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.080 00:12:17.080 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.080 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.080 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.339 { 00:12:17.339 "cntlid": 145, 00:12:17.339 "qid": 0, 00:12:17.339 "state": "enabled", 00:12:17.339 "thread": "nvmf_tgt_poll_group_000", 00:12:17.339 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:17.339 "listen_address": { 00:12:17.339 "trtype": "TCP", 00:12:17.339 "adrfam": "IPv4", 00:12:17.339 "traddr": "10.0.0.3", 00:12:17.339 "trsvcid": "4420" 00:12:17.339 }, 00:12:17.339 "peer_address": { 00:12:17.339 "trtype": "TCP", 00:12:17.339 "adrfam": "IPv4", 00:12:17.339 "traddr": "10.0.0.1", 00:12:17.339 "trsvcid": "38342" 00:12:17.339 }, 00:12:17.339 "auth": { 00:12:17.339 "state": "completed", 00:12:17.339 "digest": "sha512", 00:12:17.339 "dhgroup": "ffdhe8192" 00:12:17.339 } 00:12:17.339 } 00:12:17.339 ]' 00:12:17.339 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.598 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.857 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:12:17.857 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:00:ZmM2MTBmZmVmOTljODgyOGIzYjU2MzgwNjM0YjI0MWYwMmU0NGM0ZWQ1N2IyNmIzW0NJug==: --dhchap-ctrl-secret DHHC-1:03:NzdmYzljMzg5ZDljOTY0NDdkNzBmMTU4NjgxODFlNzA5YmFmODE2MDQ2YzdjNmIwN2YzNzgxOGZkNjIzZTgwNoDt8r0=: 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 00:12:18.791 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:18.791 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:19.359 request: 00:12:19.359 { 00:12:19.359 "name": "nvme0", 00:12:19.359 "trtype": "tcp", 00:12:19.359 "traddr": "10.0.0.3", 00:12:19.359 "adrfam": "ipv4", 00:12:19.359 "trsvcid": "4420", 00:12:19.359 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:19.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:19.359 "prchk_reftag": false, 00:12:19.359 "prchk_guard": false, 00:12:19.359 "hdgst": false, 00:12:19.359 "ddgst": false, 00:12:19.359 "dhchap_key": "key2", 00:12:19.359 "allow_unrecognized_csi": false, 00:12:19.359 "method": "bdev_nvme_attach_controller", 00:12:19.359 "req_id": 1 00:12:19.359 } 00:12:19.359 Got JSON-RPC error response 00:12:19.359 response: 00:12:19.359 { 00:12:19.359 "code": -5, 00:12:19.359 "message": "Input/output error" 00:12:19.359 } 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:19.360 
10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:19.360 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:19.928 request: 00:12:19.928 { 00:12:19.928 "name": "nvme0", 00:12:19.928 "trtype": "tcp", 00:12:19.928 "traddr": "10.0.0.3", 00:12:19.928 "adrfam": "ipv4", 00:12:19.928 "trsvcid": "4420", 00:12:19.928 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:19.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:19.928 "prchk_reftag": false, 00:12:19.928 "prchk_guard": false, 00:12:19.928 "hdgst": false, 00:12:19.928 "ddgst": false, 00:12:19.928 "dhchap_key": "key1", 00:12:19.928 "dhchap_ctrlr_key": "ckey2", 00:12:19.928 "allow_unrecognized_csi": false, 00:12:19.928 "method": "bdev_nvme_attach_controller", 00:12:19.928 "req_id": 1 00:12:19.928 } 00:12:19.928 Got JSON-RPC error response 00:12:19.928 response: 00:12:19.928 { 
00:12:19.928 "code": -5, 00:12:19.928 "message": "Input/output error" 00:12:19.928 } 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.928 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.496 
request: 00:12:20.496 { 00:12:20.496 "name": "nvme0", 00:12:20.496 "trtype": "tcp", 00:12:20.496 "traddr": "10.0.0.3", 00:12:20.496 "adrfam": "ipv4", 00:12:20.496 "trsvcid": "4420", 00:12:20.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:20.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:20.496 "prchk_reftag": false, 00:12:20.496 "prchk_guard": false, 00:12:20.496 "hdgst": false, 00:12:20.496 "ddgst": false, 00:12:20.496 "dhchap_key": "key1", 00:12:20.496 "dhchap_ctrlr_key": "ckey1", 00:12:20.496 "allow_unrecognized_csi": false, 00:12:20.496 "method": "bdev_nvme_attach_controller", 00:12:20.496 "req_id": 1 00:12:20.496 } 00:12:20.496 Got JSON-RPC error response 00:12:20.496 response: 00:12:20.496 { 00:12:20.496 "code": -5, 00:12:20.496 "message": "Input/output error" 00:12:20.496 } 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66868 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66868 ']' 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66868 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66868 00:12:20.496 killing process with pid 66868 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66868' 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66868 00:12:20.496 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66868 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.755 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69912 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69912 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 69912 ']' 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.755 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69912 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 69912 ']' 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:21.013 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 null0 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wr2 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CCi ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CCi 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OYB 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.5Ch ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5Ch 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:21.580 10:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OSG 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.cxg ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cxg 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:21.580 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.NPr 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:21.581 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.516 nvme0n1 00:12:22.516 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.516 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.516 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.775 { 00:12:22.775 "cntlid": 1, 00:12:22.775 "qid": 0, 00:12:22.775 "state": "enabled", 00:12:22.775 "thread": "nvmf_tgt_poll_group_000", 00:12:22.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:22.775 "listen_address": { 00:12:22.775 "trtype": "TCP", 00:12:22.775 "adrfam": "IPv4", 00:12:22.775 "traddr": "10.0.0.3", 00:12:22.775 "trsvcid": "4420" 00:12:22.775 }, 00:12:22.775 "peer_address": { 00:12:22.775 "trtype": "TCP", 00:12:22.775 "adrfam": "IPv4", 00:12:22.775 "traddr": "10.0.0.1", 00:12:22.775 "trsvcid": "38400" 00:12:22.775 }, 00:12:22.775 "auth": { 00:12:22.775 "state": "completed", 00:12:22.775 "digest": "sha512", 00:12:22.775 "dhgroup": "ffdhe8192" 00:12:22.775 } 00:12:22.775 } 00:12:22.775 ]' 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.775 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.034 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.034 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.034 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.034 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.034 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.292 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:23.292 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:23.859 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key3 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:24.117 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.376 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.635 request: 00:12:24.635 { 00:12:24.635 "name": "nvme0", 00:12:24.635 "trtype": "tcp", 00:12:24.635 "traddr": "10.0.0.3", 00:12:24.635 "adrfam": "ipv4", 00:12:24.635 "trsvcid": "4420", 00:12:24.635 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:24.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:24.635 "prchk_reftag": false, 00:12:24.635 "prchk_guard": false, 00:12:24.635 "hdgst": false, 00:12:24.635 "ddgst": false, 00:12:24.635 "dhchap_key": "key3", 00:12:24.635 "allow_unrecognized_csi": false, 00:12:24.635 "method": "bdev_nvme_attach_controller", 00:12:24.635 "req_id": 1 00:12:24.635 } 00:12:24.635 Got JSON-RPC error response 00:12:24.635 response: 00:12:24.635 { 00:12:24.635 "code": -5, 00:12:24.635 "message": "Input/output error" 00:12:24.635 } 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:24.635 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.893 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.152 request: 00:12:25.152 { 00:12:25.152 "name": "nvme0", 00:12:25.152 "trtype": "tcp", 00:12:25.152 "traddr": "10.0.0.3", 00:12:25.152 "adrfam": "ipv4", 00:12:25.152 "trsvcid": "4420", 00:12:25.152 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:25.152 "prchk_reftag": false, 00:12:25.152 "prchk_guard": false, 00:12:25.152 "hdgst": false, 00:12:25.152 "ddgst": false, 00:12:25.152 "dhchap_key": "key3", 00:12:25.152 "allow_unrecognized_csi": false, 00:12:25.152 "method": "bdev_nvme_attach_controller", 00:12:25.152 "req_id": 1 00:12:25.152 } 00:12:25.152 Got JSON-RPC error response 00:12:25.152 response: 00:12:25.152 { 00:12:25.152 "code": -5, 00:12:25.152 "message": "Input/output error" 00:12:25.152 } 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.152 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.153 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.411 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.979 request: 00:12:25.979 { 00:12:25.979 "name": "nvme0", 00:12:25.979 "trtype": "tcp", 00:12:25.979 "traddr": "10.0.0.3", 00:12:25.979 "adrfam": "ipv4", 00:12:25.979 "trsvcid": "4420", 00:12:25.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:25.979 "prchk_reftag": false, 00:12:25.979 "prchk_guard": false, 00:12:25.979 "hdgst": false, 00:12:25.979 "ddgst": false, 00:12:25.979 "dhchap_key": "key0", 00:12:25.979 "dhchap_ctrlr_key": "key1", 00:12:25.979 "allow_unrecognized_csi": false, 00:12:25.979 "method": "bdev_nvme_attach_controller", 00:12:25.979 "req_id": 1 00:12:25.979 } 00:12:25.979 Got JSON-RPC error response 00:12:25.979 response: 00:12:25.979 { 00:12:25.979 "code": -5, 00:12:25.979 "message": "Input/output error" 00:12:25.979 } 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:25.979 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:26.238 nvme0n1 00:12:26.238 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:26.238 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.238 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:26.497 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.497 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.497 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:26.756 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:27.691 nvme0n1 00:12:27.691 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:27.691 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:27.691 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:28.258 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.517 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.517 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:28.517 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid 96df7a2d-651c-49c0-b1c8-dd965eb48096 -l 0 --dhchap-secret DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: --dhchap-ctrl-secret DHHC-1:03:ZDI0OTRjZjU4OTljOWYxOTgxZjZlMzljZmVlODQ0ZDA4ZGZhZTYyNzMwNzg5YWY3ZTkzN2FkMzYyZGJkMDAxNnEUoUg=: 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.085 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:29.344 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:29.912 request: 00:12:29.912 { 00:12:29.912 "name": "nvme0", 00:12:29.912 "trtype": "tcp", 00:12:29.912 "traddr": "10.0.0.3", 00:12:29.912 "adrfam": "ipv4", 00:12:29.912 "trsvcid": "4420", 00:12:29.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:29.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096", 00:12:29.912 "prchk_reftag": false, 00:12:29.912 "prchk_guard": false, 00:12:29.912 "hdgst": false, 00:12:29.912 "ddgst": false, 00:12:29.912 "dhchap_key": "key1", 00:12:29.912 "allow_unrecognized_csi": false, 00:12:29.912 "method": "bdev_nvme_attach_controller", 00:12:29.912 "req_id": 1 00:12:29.912 } 00:12:29.912 Got JSON-RPC error response 00:12:29.912 response: 00:12:29.912 { 00:12:29.912 "code": -5, 00:12:29.912 "message": "Input/output error" 00:12:29.912 } 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:29.912 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:30.944 nvme0n1 00:12:30.945 
10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:30.945 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.945 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:31.216 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.216 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.216 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:31.474 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:31.475 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:31.733 nvme0n1 00:12:31.992 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:31.992 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:31.992 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.250 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.250 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.250 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.509 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: '' 2s 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: ]] 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjUyY2YzNTFiZjZkYWYwNjNiNTUwNWM2YTk0NTE5MmKUmhmi: 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:32.509 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: 2s 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:34.411 10:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: ]] 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTEzOGQ4ODBmMGE2NWRmZDNmYzE4ZGU1Y2U3MTYxYTU4ODk1OGNiMTNiMjg2NDQyJwMJfg==: 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:34.411 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:36.943 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:37.510 nvme0n1 00:12:37.510 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.510 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.511 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.511 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.511 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.511 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:38.077 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:38.077 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.077 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:38.645 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:39.212 10:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.212 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:39.779 request: 00:12:39.779 { 00:12:39.779 "name": "nvme0", 00:12:39.779 "dhchap_key": "key1", 00:12:39.779 "dhchap_ctrlr_key": "key3", 00:12:39.779 "method": "bdev_nvme_set_keys", 00:12:39.779 "req_id": 1 00:12:39.779 } 00:12:39.779 Got JSON-RPC error response 00:12:39.779 response: 00:12:39.779 { 00:12:39.779 "code": -13, 00:12:39.779 "message": "Permission denied" 00:12:39.779 } 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:39.779 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.037 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:40.037 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:40.977 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:40.977 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.977 10:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:41.236 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:42.171 nvme0n1 00:12:42.171 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:42.171 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.171 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:12:42.172 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:43.106 request: 00:12:43.106 { 00:12:43.106 "name": "nvme0", 00:12:43.106 "dhchap_key": "key2", 00:12:43.106 "dhchap_ctrlr_key": "key0", 00:12:43.106 "method": "bdev_nvme_set_keys", 00:12:43.106 "req_id": 1 00:12:43.106 } 00:12:43.106 Got JSON-RPC error response 00:12:43.106 response: 00:12:43.106 { 00:12:43.106 "code": -13, 00:12:43.106 "message": "Permission denied" 00:12:43.106 } 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:43.106 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:44.041 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:44.041 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:44.041 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66892 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66892 ']' 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66892 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:44.300 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66892 00:12:44.558 killing process with pid 66892 00:12:44.558 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:44.558 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:44.558 10:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66892' 00:12:44.558 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66892 00:12:44.559 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66892 00:12:44.559 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:44.559 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.559 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.818 rmmod nvme_tcp 00:12:44.818 rmmod nvme_fabrics 00:12:44.818 rmmod nvme_keyring 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69912 ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69912 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 69912 ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 69912 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69912 00:12:44.818 killing process with pid 69912 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69912' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 69912 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 69912 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:44.818 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.076 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.335 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Wr2 /tmp/spdk.key-sha256.OYB /tmp/spdk.key-sha384.OSG /tmp/spdk.key-sha512.NPr /tmp/spdk.key-sha512.CCi /tmp/spdk.key-sha384.5Ch /tmp/spdk.key-sha256.cxg '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:45.336 00:12:45.336 real 3m6.503s 00:12:45.336 user 7m26.554s 00:12:45.336 sys 0m30.535s 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.336 ************************************ 00:12:45.336 END TEST nvmf_auth_target 
00:12:45.336 ************************************ 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.336 ************************************ 00:12:45.336 START TEST nvmf_bdevio_no_huge 00:12:45.336 ************************************ 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:45.336 * Looking for test storage... 00:12:45.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:12:45.336 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.336 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:45.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.596 --rc genhtml_branch_coverage=1 00:12:45.596 --rc genhtml_function_coverage=1 00:12:45.596 --rc genhtml_legend=1 00:12:45.596 --rc geninfo_all_blocks=1 00:12:45.596 --rc geninfo_unexecuted_blocks=1 00:12:45.596 00:12:45.596 ' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:45.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.596 --rc genhtml_branch_coverage=1 00:12:45.596 --rc genhtml_function_coverage=1 00:12:45.596 --rc genhtml_legend=1 00:12:45.596 --rc geninfo_all_blocks=1 00:12:45.596 --rc geninfo_unexecuted_blocks=1 00:12:45.596 00:12:45.596 ' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:45.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.596 --rc genhtml_branch_coverage=1 00:12:45.596 --rc genhtml_function_coverage=1 00:12:45.596 --rc genhtml_legend=1 00:12:45.596 --rc geninfo_all_blocks=1 00:12:45.596 --rc geninfo_unexecuted_blocks=1 00:12:45.596 00:12:45.596 ' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:45.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.596 --rc genhtml_branch_coverage=1 00:12:45.596 --rc genhtml_function_coverage=1 00:12:45.596 --rc genhtml_legend=1 00:12:45.596 --rc geninfo_all_blocks=1 00:12:45.596 --rc geninfo_unexecuted_blocks=1 00:12:45.596 00:12:45.596 ' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.596 
10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.596 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.596 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:45.597 
10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:45.597 Cannot find device "nvmf_init_br" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:45.597 Cannot find device "nvmf_init_br2" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:45.597 Cannot find device "nvmf_tgt_br" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.597 Cannot find device "nvmf_tgt_br2" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:45.597 Cannot find device "nvmf_init_br" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:45.597 Cannot find device "nvmf_init_br2" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:45.597 Cannot find device "nvmf_tgt_br" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:45.597 Cannot find device "nvmf_tgt_br2" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:45.597 Cannot find device "nvmf_br" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:45.597 Cannot find device "nvmf_init_if" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:45.597 Cannot find device "nvmf_init_if2" 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:45.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:45.597 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:45.856 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:45.857 10:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:45.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:45.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:45.857 00:12:45.857 --- 10.0.0.3 ping statistics --- 00:12:45.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.857 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:45.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:45.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:12:45.857 00:12:45.857 --- 10.0.0.4 ping statistics --- 00:12:45.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.857 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:45.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:45.857 00:12:45.857 --- 10.0.0.1 ping statistics --- 00:12:45.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.857 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:45.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:12:45.857 00:12:45.857 --- 10.0.0.2 ping statistics --- 00:12:45.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.857 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70548 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70548 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 70548 ']' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.857 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:45.857 [2024-11-12 10:34:34.598537] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:12:45.857 [2024-11-12 10:34:34.598653] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:46.116 [2024-11-12 10:34:34.763892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.116 [2024-11-12 10:34:34.836494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.117 [2024-11-12 10:34:34.836559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.117 [2024-11-12 10:34:34.836573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.117 [2024-11-12 10:34:34.836583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.117 [2024-11-12 10:34:34.836592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.117 [2024-11-12 10:34:34.837216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:46.117 [2024-11-12 10:34:34.837349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:46.117 [2024-11-12 10:34:34.837456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:46.117 [2024-11-12 10:34:34.837460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.117 [2024-11-12 10:34:34.842994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-11-12 10:34:35.684288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 Malloc0 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 10:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-11-12 10:34:35.722690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:47.054 { 00:12:47.054 "params": { 00:12:47.054 "name": "Nvme$subsystem", 00:12:47.054 "trtype": "$TEST_TRANSPORT", 00:12:47.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.054 "adrfam": "ipv4", 00:12:47.054 "trsvcid": "$NVMF_PORT", 00:12:47.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.054 "hdgst": ${hdgst:-false}, 00:12:47.054 "ddgst": ${ddgst:-false} 00:12:47.054 }, 00:12:47.054 "method": "bdev_nvme_attach_controller" 00:12:47.054 } 00:12:47.054 EOF 00:12:47.054 )") 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:47.054 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:47.054 "params": { 00:12:47.054 "name": "Nvme1", 00:12:47.054 "trtype": "tcp", 00:12:47.054 "traddr": "10.0.0.3", 00:12:47.054 "adrfam": "ipv4", 00:12:47.054 "trsvcid": "4420", 00:12:47.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.054 "hdgst": false, 00:12:47.054 "ddgst": false 00:12:47.054 }, 00:12:47.054 "method": "bdev_nvme_attach_controller" 00:12:47.054 }' 00:12:47.054 [2024-11-12 10:34:35.781548] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:12:47.054 [2024-11-12 10:34:35.781636] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70590 ] 00:12:47.312 [2024-11-12 10:34:35.943421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.312 [2024-11-12 10:34:36.018619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.312 [2024-11-12 10:34:36.018744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.312 [2024-11-12 10:34:36.018763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.312 [2024-11-12 10:34:36.033126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:47.572 I/O targets: 00:12:47.572 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:47.572 00:12:47.572 00:12:47.572 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.572 http://cunit.sourceforge.net/ 00:12:47.572 00:12:47.572 00:12:47.572 Suite: bdevio tests on: Nvme1n1 00:12:47.572 Test: blockdev write read block ...passed 00:12:47.572 Test: blockdev write zeroes read block ...passed 00:12:47.572 Test: blockdev write zeroes read no split ...passed 00:12:47.572 Test: blockdev write zeroes read split ...passed 00:12:47.572 Test: blockdev write zeroes read split partial ...passed 00:12:47.572 Test: blockdev reset ...[2024-11-12 10:34:36.264631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:47.572 [2024-11-12 10:34:36.264749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68b310 (9): Bad file descriptor 00:12:47.572 [2024-11-12 10:34:36.285050] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:47.572 passed 00:12:47.572 Test: blockdev write read 8 blocks ...passed 00:12:47.572 Test: blockdev write read size > 128k ...passed 00:12:47.572 Test: blockdev write read invalid size ...passed 00:12:47.572 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.572 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.572 Test: blockdev write read max offset ...passed 00:12:47.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.572 Test: blockdev writev readv 8 blocks ...passed 00:12:47.572 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.572 Test: blockdev writev readv block ...passed 00:12:47.572 Test: blockdev writev readv size > 128k ...passed 00:12:47.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.572 Test: blockdev comparev and writev ...[2024-11-12 10:34:36.295253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.295296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.295317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.295328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.295615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.295644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.295662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.295900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.296216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.296248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.296267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.296277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.296638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.296671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.296690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:47.572 [2024-11-12 10:34:36.296700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:47.572 passed 00:12:47.572 Test: blockdev nvme passthru rw ...passed 00:12:47.572 Test: blockdev nvme passthru vendor specific ...[2024-11-12 10:34:36.297824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:47.572 [2024-11-12 10:34:36.297935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.298244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:47.572 [2024-11-12 10:34:36.298283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.298486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:47.572 [2024-11-12 10:34:36.298741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:47.572 [2024-11-12 10:34:36.298954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:47.572 [2024-11-12 10:34:36.299075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:47.572 passed 00:12:47.572 Test: blockdev nvme admin passthru ...passed 00:12:47.572 Test: blockdev copy ...passed 00:12:47.572 00:12:47.572 Run Summary: Type Total Ran Passed Failed Inactive 00:12:47.572 suites 1 1 n/a 0 0 00:12:47.572 tests 23 23 23 0 0 00:12:47.572 asserts 152 152 152 0 n/a 00:12:47.572 00:12:47.572 Elapsed time = 0.172 seconds 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.140 rmmod nvme_tcp 00:12:48.140 rmmod nvme_fabrics 00:12:48.140 rmmod nvme_keyring 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70548 ']' 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70548 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 70548 ']' 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 70548 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70548 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:48.140 killing process with pid 70548 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70548' 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 70548 00:12:48.140 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 70548 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:48.399 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:48.659 10:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:48.659 00:12:48.659 real 0m3.433s 00:12:48.659 user 0m10.497s 00:12:48.659 sys 0m1.248s 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:48.659 ************************************ 00:12:48.659 END TEST nvmf_bdevio_no_huge 00:12:48.659 ************************************ 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.659 ************************************ 00:12:48.659 START TEST nvmf_tls 00:12:48.659 ************************************ 00:12:48.659 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:48.919 * Looking for test storage... 
00:12:48.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.919 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:48.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.920 --rc genhtml_branch_coverage=1 00:12:48.920 --rc genhtml_function_coverage=1 00:12:48.920 --rc genhtml_legend=1 00:12:48.920 --rc geninfo_all_blocks=1 00:12:48.920 --rc geninfo_unexecuted_blocks=1 00:12:48.920 00:12:48.920 ' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:48.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.920 --rc genhtml_branch_coverage=1 00:12:48.920 --rc genhtml_function_coverage=1 00:12:48.920 --rc genhtml_legend=1 00:12:48.920 --rc geninfo_all_blocks=1 00:12:48.920 --rc geninfo_unexecuted_blocks=1 00:12:48.920 00:12:48.920 ' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:48.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.920 --rc genhtml_branch_coverage=1 00:12:48.920 --rc genhtml_function_coverage=1 00:12:48.920 --rc genhtml_legend=1 00:12:48.920 --rc geninfo_all_blocks=1 00:12:48.920 --rc geninfo_unexecuted_blocks=1 00:12:48.920 00:12:48.920 ' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:48.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.920 --rc genhtml_branch_coverage=1 00:12:48.920 --rc genhtml_function_coverage=1 00:12:48.920 --rc genhtml_legend=1 00:12:48.920 --rc geninfo_all_blocks=1 00:12:48.920 --rc geninfo_unexecuted_blocks=1 00:12:48.920 00:12:48.920 ' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.920 10:34:37 
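Note: the lcov probe above reduces to a field-wise version comparison (split the two version strings on ".", "-" and ":", then compare left to right) that decides whether the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options are needed. A minimal standalone sketch of that check, assuming purely numeric fields; this is a hypothetical reimplementation for illustration, not the repo's scripts/common.sh:

```bash
# Hypothetical sketch of the "lt A B" version check performed above.
# Assumes numeric fields only; missing fields are treated as 0.
version_lt() {
    local IFS='.-:' i x y
    local -a a=($1) b=($2)
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # first differing field decides
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: fall back to the legacy --rc lcov_* options"
```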
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.920 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.920 
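Note: the "[: : integer expression expected" message above is a benign bash warning from nvmf/common.sh line 33, where an empty string is tested numerically ('[' '' -eq 1 ']'); the test simply evaluates false and the optional NVMF_APP argument is skipped. A tiny illustration of the pattern and a guarded alternative; the variable name below is hypothetical, chosen only for the example:

```bash
# Reproduces the warning seen above, then shows a form that avoids it.
flag=""                                     # hypothetical, stands in for the unset setting
[ "$flag" -eq 1 ] && echo "enabled"         # bash warns "integer expression expected", test is false
[ "${flag:-0}" -eq 1 ] && echo "enabled"    # defaulting the empty value to 0 avoids the warning
```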
10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:48.920 Cannot find device "nvmf_init_br" 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:48.920 Cannot find device "nvmf_init_br2" 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:48.920 Cannot find device "nvmf_tgt_br" 00:12:48.920 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:48.921 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.921 Cannot find device "nvmf_tgt_br2" 00:12:48.921 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:48.921 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:48.921 Cannot find device "nvmf_init_br" 00:12:48.921 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:48.921 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:49.202 Cannot find device "nvmf_init_br2" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:49.202 Cannot find device "nvmf_tgt_br" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:49.202 Cannot find device "nvmf_tgt_br2" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:49.202 Cannot find device "nvmf_br" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:49.202 Cannot find device "nvmf_init_if" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:49.202 Cannot find device "nvmf_init_if2" 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:49.202 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:49.203 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:49.469 10:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:49.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:12:49.469 00:12:49.469 --- 10.0.0.3 ping statistics --- 00:12:49.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.469 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:49.469 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:49.469 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:12:49.469 00:12:49.469 --- 10.0.0.4 ping statistics --- 00:12:49.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.469 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:49.469 00:12:49.469 --- 10.0.0.1 ping statistics --- 00:12:49.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.469 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:49.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:49.469 00:12:49.469 --- 10.0.0.2 ping statistics --- 00:12:49.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.469 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:49.469 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.469 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70827 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70827 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70827 ']' 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:49.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:49.470 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.470 [2024-11-12 10:34:38.084236] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
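Note: with the host-side addresses (10.0.0.1/24, 10.0.0.2/24) and the namespace-side addresses (10.0.0.3/24, 10.0.0.4/24) all answering pings, nvmf_veth_init has finished building the test network, and nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc. The port-4420 ACCEPT rules are inserted with an "SPDK_NVMF:" comment, which is what lets the teardown at the top of this section drop exactly those rules via "iptables-save | grep -v SPDK_NVMF | iptables-restore". Condensed to a single initiator/target pair (the real helper creates two of each plus the iptables rules), the topology the commands above build looks roughly like this; a hedged sketch that needs root, not a replacement for nvmf/common.sh:

```bash
# Condensed sketch of the veth/netns/bridge topology assembled by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the *_br peer ends
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3                                             # host reaches the namespaced target address
```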
00:12:49.470 [2024-11-12 10:34:38.084334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.729 [2024-11-12 10:34:38.238794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.729 [2024-11-12 10:34:38.277144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.729 [2024-11-12 10:34:38.277228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.729 [2024-11-12 10:34:38.277242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.729 [2024-11-12 10:34:38.277252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.729 [2024-11-12 10:34:38.277261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.729 [2024-11-12 10:34:38.277659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.298 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:50.298 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:12:50.298 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.298 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.298 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:50.298 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.298 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:50.298 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:50.557 true 00:12:50.557 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:50.557 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:50.816 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:50.816 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:50.816 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:51.075 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.075 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:51.643 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:51.643 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:51.643 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:51.643 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:51.643 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:52.210 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:52.778 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:52.778 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:52.778 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:52.778 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:52.778 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:53.347 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:53.347 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:53.347 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.YgMyJpVa4m 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.VlN36E5DlV 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YgMyJpVa4m 00:12:53.605 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.VlN36E5DlV 00:12:53.606 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:53.864 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:54.124 [2024-11-12 10:34:42.689858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:54.124 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.YgMyJpVa4m 00:12:54.124 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YgMyJpVa4m 00:12:54.124 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:54.384 [2024-11-12 10:34:42.943448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.384 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:54.643 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:54.902 [2024-11-12 10:34:43.403623] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:54.902 [2024-11-12 10:34:43.403840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:54.902 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:54.902 malloc0 00:12:54.902 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:55.469 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YgMyJpVa4m 00:12:55.469 10:34:44 
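Note: once the target is up, tls.sh round-trips the ssl sock implementation's tls_version and ktls settings over RPC, then generates two PSKs in the NVMe TLS interchange format (NVMeTLSkey-1:01:<base64>:), writes them to 0600 temp files, and configures the target: TCP transport, subsystem cnode1, a TLS-enabled listener on 10.0.0.3:4420 (-k), a malloc0 namespace, and key0 registered through keyring_file_add_key. The sketch below is not part of the test; it decodes the first key generated above and checks the trailing CRC-32 that format_interchange_psk appends to the configured PSK bytes, probing both byte orders since only the helper/spec fixes that detail:

```bash
# Hypothetical verification helper: decode an NVMeTLSkey-1 interchange string and
# confirm its trailing 4 bytes are the CRC-32 of the configured PSK portion.
key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
python3 - "$key" <<'PY'
import base64, sys, zlib
blob = base64.b64decode(sys.argv[1].split(':')[2])   # strip prefix/suffix, decode payload
psk, crc = blob[:-4], blob[-4:]                       # configured PSK bytes + appended CRC-32
print('configured PSK:', psk.decode())
calc = zlib.crc32(psk)
print('CRC matches (little-endian):', crc == calc.to_bytes(4, 'little'))
print('CRC matches (big-endian):   ', crc == calc.to_bytes(4, 'big'))
PY
```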
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:55.729 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YgMyJpVa4m 00:13:07.938 Initializing NVMe Controllers 00:13:07.938 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.938 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:07.938 Initialization complete. Launching workers. 00:13:07.938 ======================================================== 00:13:07.938 Latency(us) 00:13:07.938 Device Information : IOPS MiB/s Average min max 00:13:07.938 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10738.95 41.95 5960.60 1617.05 7842.39 00:13:07.938 ======================================================== 00:13:07.938 Total : 10738.95 41.95 5960.60 1617.05 7842.39 00:13:07.938 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YgMyJpVa4m 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YgMyJpVa4m 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71067 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71067 /var/tmp/bdevperf.sock 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71067 ']' 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:07.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.938 [2024-11-12 10:34:54.654636] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:07.938 [2024-11-12 10:34:54.654743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:13:07.938 [2024-11-12 10:34:54.807591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.938 [2024-11-12 10:34:54.847057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.938 [2024-11-12 10:34:54.880897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:07.938 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YgMyJpVa4m 00:13:07.938 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:07.938 [2024-11-12 10:34:55.435769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:07.938 TLSTESTn1 00:13:07.938 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:07.938 Running I/O for 10 seconds... 
00:13:09.317 4352.00 IOPS, 17.00 MiB/s [2024-11-12T10:34:59.012Z] 4528.00 IOPS, 17.69 MiB/s [2024-11-12T10:34:59.985Z] 4569.00 IOPS, 17.85 MiB/s [2024-11-12T10:35:00.930Z] 4570.00 IOPS, 17.85 MiB/s [2024-11-12T10:35:01.867Z] 4560.00 IOPS, 17.81 MiB/s [2024-11-12T10:35:02.804Z] 4569.17 IOPS, 17.85 MiB/s [2024-11-12T10:35:03.740Z] 4582.29 IOPS, 17.90 MiB/s [2024-11-12T10:35:04.677Z] 4593.50 IOPS, 17.94 MiB/s [2024-11-12T10:35:06.054Z] 4600.33 IOPS, 17.97 MiB/s [2024-11-12T10:35:06.054Z] 4618.90 IOPS, 18.04 MiB/s 00:13:17.296 Latency(us) 00:13:17.296 [2024-11-12T10:35:06.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.296 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:17.296 Verification LBA range: start 0x0 length 0x2000 00:13:17.296 TLSTESTn1 : 10.01 4624.49 18.06 0.00 0.00 27632.08 4617.31 21567.30 00:13:17.296 [2024-11-12T10:35:06.054Z] =================================================================================================================== 00:13:17.296 [2024-11-12T10:35:06.055Z] Total : 4624.49 18.06 0.00 0.00 27632.08 4617.31 21567.30 00:13:17.297 { 00:13:17.297 "results": [ 00:13:17.297 { 00:13:17.297 "job": "TLSTESTn1", 00:13:17.297 "core_mask": "0x4", 00:13:17.297 "workload": "verify", 00:13:17.297 "status": "finished", 00:13:17.297 "verify_range": { 00:13:17.297 "start": 0, 00:13:17.297 "length": 8192 00:13:17.297 }, 00:13:17.297 "queue_depth": 128, 00:13:17.297 "io_size": 4096, 00:13:17.297 "runtime": 10.014934, 00:13:17.297 "iops": 4624.493780987474, 00:13:17.297 "mibps": 18.06442883198232, 00:13:17.297 "io_failed": 0, 00:13:17.297 "io_timeout": 0, 00:13:17.297 "avg_latency_us": 27632.07939229057, 00:13:17.297 "min_latency_us": 4617.309090909091, 00:13:17.297 "max_latency_us": 21567.30181818182 00:13:17.297 } 00:13:17.297 ], 00:13:17.297 "core_count": 1 00:13:17.297 } 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71067 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71067 ']' 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71067 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71067 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:17.297 killing process with pid 71067 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71067' 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71067 00:13:17.297 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.297 00:13:17.297 Latency(us) 00:13:17.297 [2024-11-12T10:35:06.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.297 [2024-11-12T10:35:06.055Z] 
=================================================================================================================== 00:13:17.297 [2024-11-12T10:35:06.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71067 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VlN36E5DlV 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VlN36E5DlV 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VlN36E5DlV 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VlN36E5DlV 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71194 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71194 /var/tmp/bdevperf.sock 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71194 ']' 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:17.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:17.297 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.297 [2024-11-12 10:35:05.912327] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:17.297 [2024-11-12 10:35:05.912435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:13:17.297 [2024-11-12 10:35:06.050386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.556 [2024-11-12 10:35:06.079551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.556 [2024-11-12 10:35:06.107412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:17.556 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.556 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:17.556 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VlN36E5DlV 00:13:17.815 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:18.074 [2024-11-12 10:35:06.693135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:18.074 [2024-11-12 10:35:06.698571] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:18.074 [2024-11-12 10:35:06.698725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf26fb0 (107): Transport endpoint is not connected 00:13:18.074 [2024-11-12 10:35:06.699715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf26fb0 (9): Bad file descriptor 00:13:18.074 [2024-11-12 10:35:06.700711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:18.074 [2024-11-12 10:35:06.700751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:18.074 [2024-11-12 10:35:06.700761] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:18.074 [2024-11-12 10:35:06.700770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
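Note: this attach attempt is supposed to fail. On the bdevperf side, key0 now points at /tmp/tmp.VlN36E5DlV, while the target only accepted host1 with the first key, so the TLS handshake never completes and the controller lands in the failed state shown above; the JSON-RPC error response that follows is the expected outcome. The NOT wrapper turns that failure into a pass for the test. A hypothetical sketch of the expected-failure pattern, not the exact autotest_common.sh code:

```bash
# Sketch of the "command must fail" pattern behind the harness's NOT wrapper.
expect_failure() {
    if "$@"; then
        echo "ERROR: '$*' unexpectedly succeeded" >&2
        return 1
    fi
    return 0    # the wrapped command failed, which is what the test wants
}

# run_bdevperf is the harness function exercised above.
expect_failure run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VlN36E5DlV
```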
00:13:18.074 request: 00:13:18.074 { 00:13:18.074 "name": "TLSTEST", 00:13:18.074 "trtype": "tcp", 00:13:18.074 "traddr": "10.0.0.3", 00:13:18.074 "adrfam": "ipv4", 00:13:18.074 "trsvcid": "4420", 00:13:18.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.074 "prchk_reftag": false, 00:13:18.074 "prchk_guard": false, 00:13:18.074 "hdgst": false, 00:13:18.074 "ddgst": false, 00:13:18.074 "psk": "key0", 00:13:18.074 "allow_unrecognized_csi": false, 00:13:18.074 "method": "bdev_nvme_attach_controller", 00:13:18.074 "req_id": 1 00:13:18.074 } 00:13:18.074 Got JSON-RPC error response 00:13:18.074 response: 00:13:18.074 { 00:13:18.074 "code": -5, 00:13:18.074 "message": "Input/output error" 00:13:18.074 } 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71194 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71194 ']' 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71194 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71194 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:18.074 killing process with pid 71194 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71194' 00:13:18.074 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71194 00:13:18.075 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.075 00:13:18.075 Latency(us) 00:13:18.075 [2024-11-12T10:35:06.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.075 [2024-11-12T10:35:06.833Z] =================================================================================================================== 00:13:18.075 [2024-11-12T10:35:06.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:18.075 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71194 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YgMyJpVa4m 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YgMyJpVa4m 
00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YgMyJpVa4m 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YgMyJpVa4m 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71215 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71215 /var/tmp/bdevperf.sock 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71215 ']' 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:18.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:18.334 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.334 [2024-11-12 10:35:06.937871] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:13:18.334 [2024-11-12 10:35:06.937955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71215 ] 00:13:18.334 [2024-11-12 10:35:07.070468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.593 [2024-11-12 10:35:07.100778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.593 [2024-11-12 10:35:07.128569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.593 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:18.593 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:18.593 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YgMyJpVa4m 00:13:18.852 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:19.111 [2024-11-12 10:35:07.701998] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.111 [2024-11-12 10:35:07.708588] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:19.111 [2024-11-12 10:35:07.708638] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:19.111 [2024-11-12 10:35:07.708683] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:19.111 [2024-11-12 10:35:07.709124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1621fb0 (107): Transport endpoint is not connected 00:13:19.111 [2024-11-12 10:35:07.710336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1621fb0 (9): Bad file descriptor 00:13:19.111 [2024-11-12 10:35:07.711113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:19.111 [2024-11-12 10:35:07.711175] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:19.111 [2024-11-12 10:35:07.711186] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:19.111 [2024-11-12 10:35:07.711208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:19.111 request: 00:13:19.111 { 00:13:19.111 "name": "TLSTEST", 00:13:19.111 "trtype": "tcp", 00:13:19.111 "traddr": "10.0.0.3", 00:13:19.111 "adrfam": "ipv4", 00:13:19.111 "trsvcid": "4420", 00:13:19.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:19.111 "prchk_reftag": false, 00:13:19.111 "prchk_guard": false, 00:13:19.111 "hdgst": false, 00:13:19.111 "ddgst": false, 00:13:19.111 "psk": "key0", 00:13:19.111 "allow_unrecognized_csi": false, 00:13:19.111 "method": "bdev_nvme_attach_controller", 00:13:19.111 "req_id": 1 00:13:19.111 } 00:13:19.111 Got JSON-RPC error response 00:13:19.111 response: 00:13:19.111 { 00:13:19.111 "code": -5, 00:13:19.111 "message": "Input/output error" 00:13:19.111 } 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71215 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71215 ']' 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71215 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71215 00:13:19.111 killing process with pid 71215 00:13:19.111 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.111 00:13:19.111 Latency(us) 00:13:19.111 [2024-11-12T10:35:07.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.111 [2024-11-12T10:35:07.869Z] =================================================================================================================== 00:13:19.111 [2024-11-12T10:35:07.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71215' 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71215 00:13:19.111 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71215 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YgMyJpVa4m 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YgMyJpVa4m 
00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YgMyJpVa4m 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YgMyJpVa4m 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71235 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71235 /var/tmp/bdevperf.sock 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71235 ']' 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.371 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.371 [2024-11-12 10:35:07.958738] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:13:19.371 [2024-11-12 10:35:07.958851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71235 ] 00:13:19.371 [2024-11-12 10:35:08.104830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.629 [2024-11-12 10:35:08.135562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.629 [2024-11-12 10:35:08.163681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.197 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.197 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:20.197 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YgMyJpVa4m 00:13:20.456 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:20.715 [2024-11-12 10:35:09.373080] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.715 [2024-11-12 10:35:09.378891] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:20.715 [2024-11-12 10:35:09.378930] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:20.715 [2024-11-12 10:35:09.378993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:20.715 [2024-11-12 10:35:09.379784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d10fb0 (107): Transport endpoint is not connected 00:13:20.715 [2024-11-12 10:35:09.380777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d10fb0 (9): Bad file descriptor 00:13:20.715 [2024-11-12 10:35:09.381774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:20.715 [2024-11-12 10:35:09.381803] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:20.715 [2024-11-12 10:35:09.381830] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:20.715 [2024-11-12 10:35:09.381856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:20.715 request: 00:13:20.715 { 00:13:20.715 "name": "TLSTEST", 00:13:20.715 "trtype": "tcp", 00:13:20.715 "traddr": "10.0.0.3", 00:13:20.715 "adrfam": "ipv4", 00:13:20.715 "trsvcid": "4420", 00:13:20.715 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:20.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.715 "prchk_reftag": false, 00:13:20.715 "prchk_guard": false, 00:13:20.715 "hdgst": false, 00:13:20.715 "ddgst": false, 00:13:20.715 "psk": "key0", 00:13:20.715 "allow_unrecognized_csi": false, 00:13:20.715 "method": "bdev_nvme_attach_controller", 00:13:20.715 "req_id": 1 00:13:20.715 } 00:13:20.715 Got JSON-RPC error response 00:13:20.715 response: 00:13:20.715 { 00:13:20.715 "code": -5, 00:13:20.715 "message": "Input/output error" 00:13:20.715 } 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71235 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71235 ']' 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71235 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71235 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71235' 00:13:20.715 killing process with pid 71235 00:13:20.715 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.715 00:13:20.715 Latency(us) 00:13:20.715 [2024-11-12T10:35:09.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.715 [2024-11-12T10:35:09.473Z] =================================================================================================================== 00:13:20.715 [2024-11-12T10:35:09.473Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71235 00:13:20.715 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71235 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.975 10:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71265 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71265 /var/tmp/bdevperf.sock 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71265 ']' 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.975 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.975 [2024-11-12 10:35:09.620643] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:13:20.975 [2024-11-12 10:35:09.620897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71265 ] 00:13:21.234 [2024-11-12 10:35:09.764408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.234 [2024-11-12 10:35:09.793990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.234 [2024-11-12 10:35:09.822010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:21.234 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.234 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:21.234 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:21.493 [2024-11-12 10:35:10.151807] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:21.493 [2024-11-12 10:35:10.151866] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:21.493 request: 00:13:21.493 { 00:13:21.493 "name": "key0", 00:13:21.493 "path": "", 00:13:21.493 "method": "keyring_file_add_key", 00:13:21.493 "req_id": 1 00:13:21.493 } 00:13:21.493 Got JSON-RPC error response 00:13:21.493 response: 00:13:21.493 { 00:13:21.493 "code": -1, 00:13:21.493 "message": "Operation not permitted" 00:13:21.493 } 00:13:21.493 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:21.751 [2024-11-12 10:35:10.403981] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.751 [2024-11-12 10:35:10.404626] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:21.751 request: 00:13:21.751 { 00:13:21.751 "name": "TLSTEST", 00:13:21.751 "trtype": "tcp", 00:13:21.751 "traddr": "10.0.0.3", 00:13:21.751 "adrfam": "ipv4", 00:13:21.751 "trsvcid": "4420", 00:13:21.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.751 "prchk_reftag": false, 00:13:21.751 "prchk_guard": false, 00:13:21.751 "hdgst": false, 00:13:21.751 "ddgst": false, 00:13:21.751 "psk": "key0", 00:13:21.751 "allow_unrecognized_csi": false, 00:13:21.751 "method": "bdev_nvme_attach_controller", 00:13:21.751 "req_id": 1 00:13:21.751 } 00:13:21.751 Got JSON-RPC error response 00:13:21.751 response: 00:13:21.752 { 00:13:21.752 "code": -126, 00:13:21.752 "message": "Required key not available" 00:13:21.752 } 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71265 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71265 ']' 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71265 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.752 10:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71265 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:21.752 killing process with pid 71265 00:13:21.752 Received shutdown signal, test time was about 10.000000 seconds 00:13:21.752 00:13:21.752 Latency(us) 00:13:21.752 [2024-11-12T10:35:10.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.752 [2024-11-12T10:35:10.510Z] =================================================================================================================== 00:13:21.752 [2024-11-12T10:35:10.510Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71265' 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71265 00:13:21.752 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71265 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70827 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70827 ']' 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70827 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70827 00:13:22.011 killing process with pid 70827 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70827' 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70827 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70827 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:22.011 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.SCyMkIOFpD 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.SCyMkIOFpD 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71296 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71296 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71296 ']' 00:13:22.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.271 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.271 [2024-11-12 10:35:10.871341] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:22.271 [2024-11-12 10:35:10.872156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.271 [2024-11-12 10:35:11.027023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.547 [2024-11-12 10:35:11.064932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.547 [2024-11-12 10:35:11.065316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
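Note: the key_long value generated above is the NVMe/TCP TLS PSK interchange format: an NVMeTLSkey-1 prefix, a two-digit hash indicator (02 selects SHA-384 here, 01 would be SHA-256), and base64 of the configured key bytes with a 4-byte CRC-32 appended. A small stand-alone sketch of the encoding, following the format_interchange_psk/format_key trace from nvmf/common.sh above (an illustration, not the canonical helper):

    format_interchange_psk() {
        local key=$1 digest=$2   # digest: 1 = SHA-256, 2 = SHA-384
        # base64(key bytes + 4-byte little-endian CRC-32), wrapped in the NVMeTLSkey-1 prefix
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"
    }

    # Should reproduce the key_long value captured above
    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2

The resulting string is what gets written to /tmp/tmp.SCyMkIOFpD, chmod'ed to 0600, and later registered with keyring_file_add_key.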
00:13:22.547 [2024-11-12 10:35:11.065575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.547 [2024-11-12 10:35:11.065731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.547 [2024-11-12 10:35:11.065880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.547 [2024-11-12 10:35:11.066313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.547 [2024-11-12 10:35:11.098646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SCyMkIOFpD 00:13:23.116 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:23.375 [2024-11-12 10:35:12.115844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.634 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:23.634 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:23.893 [2024-11-12 10:35:12.639960] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:23.893 [2024-11-12 10:35:12.640428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:24.152 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:24.152 malloc0 00:13:24.152 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:24.411 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:24.671 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCyMkIOFpD 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
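Note: the target side for this run is prepared by setup_nvmf_tgt, whose RPC calls are traced above. Condensed into one sequence (all values are the ones used in this run; rpc.py talks to the target's default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable ("TLS support is considered experimental" above)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD
    # Only a host listed with a matching PSK can complete the TLS handshake on this subsystem
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With this in place, the bdevperf attach that follows (same key, nqn.2016-06.io.spdk:host1) succeeds, unlike the mismatched attempts earlier.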
00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SCyMkIOFpD 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71357 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71357 /var/tmp/bdevperf.sock 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71357 ']' 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.932 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.191 [2024-11-12 10:35:13.691008] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
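Note: this bdevperf instance is the positive-path run: the key registered next is the one the target was configured with, so the attach succeeds and the harness then triggers the timed workload through bdevperf's RPC helper. A sketch of that final step (the -q 128 -o 4096 -w verify -t 10 workload was fixed when bdevperf was launched; -t 20 here appears to be the helper's own wait timeout):

    # Trigger the pre-configured verify workload and wait for the per-core results.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The "Running I/O for 10 seconds" trace and the JSON result block that follow come from this call.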
00:13:25.191 [2024-11-12 10:35:13.691338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71357 ] 00:13:25.191 [2024-11-12 10:35:13.837913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.191 [2024-11-12 10:35:13.877060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.191 [2024-11-12 10:35:13.910925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:26.129 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.129 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:26.129 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:26.388 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:26.388 [2024-11-12 10:35:15.100311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.647 TLSTESTn1 00:13:26.647 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:26.647 Running I/O for 10 seconds... 00:13:29.006 4480.00 IOPS, 17.50 MiB/s [2024-11-12T10:35:18.342Z] 4476.00 IOPS, 17.48 MiB/s [2024-11-12T10:35:19.720Z] 4451.00 IOPS, 17.39 MiB/s [2024-11-12T10:35:20.657Z] 4504.25 IOPS, 17.59 MiB/s [2024-11-12T10:35:21.594Z] 4531.20 IOPS, 17.70 MiB/s [2024-11-12T10:35:22.530Z] 4538.00 IOPS, 17.73 MiB/s [2024-11-12T10:35:23.466Z] 4540.00 IOPS, 17.73 MiB/s [2024-11-12T10:35:24.403Z] 4547.75 IOPS, 17.76 MiB/s [2024-11-12T10:35:25.339Z] 4554.00 IOPS, 17.79 MiB/s [2024-11-12T10:35:25.598Z] 4554.90 IOPS, 17.79 MiB/s 00:13:36.840 Latency(us) 00:13:36.840 [2024-11-12T10:35:25.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:36.840 Verification LBA range: start 0x0 length 0x2000 00:13:36.840 TLSTESTn1 : 10.02 4560.19 17.81 0.00 0.00 28019.17 5898.24 21686.46 00:13:36.840 [2024-11-12T10:35:25.598Z] =================================================================================================================== 00:13:36.840 [2024-11-12T10:35:25.598Z] Total : 4560.19 17.81 0.00 0.00 28019.17 5898.24 21686.46 00:13:36.840 { 00:13:36.840 "results": [ 00:13:36.840 { 00:13:36.840 "job": "TLSTESTn1", 00:13:36.840 "core_mask": "0x4", 00:13:36.840 "workload": "verify", 00:13:36.840 "status": "finished", 00:13:36.840 "verify_range": { 00:13:36.840 "start": 0, 00:13:36.840 "length": 8192 00:13:36.840 }, 00:13:36.840 "queue_depth": 128, 00:13:36.840 "io_size": 4096, 00:13:36.840 "runtime": 10.01624, 00:13:36.840 "iops": 4560.194244546856, 00:13:36.840 "mibps": 17.813258767761155, 00:13:36.840 "io_failed": 0, 00:13:36.840 "io_timeout": 0, 00:13:36.840 "avg_latency_us": 28019.16949008431, 00:13:36.840 "min_latency_us": 5898.24, 00:13:36.840 "max_latency_us": 
21686.458181818183 00:13:36.840 } 00:13:36.840 ], 00:13:36.840 "core_count": 1 00:13:36.840 } 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71357 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71357 ']' 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71357 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71357 00:13:36.840 killing process with pid 71357 00:13:36.840 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.840 00:13:36.840 Latency(us) 00:13:36.840 [2024-11-12T10:35:25.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.840 [2024-11-12T10:35:25.598Z] =================================================================================================================== 00:13:36.840 [2024-11-12T10:35:25.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71357' 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71357 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71357 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.SCyMkIOFpD 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCyMkIOFpD 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCyMkIOFpD 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SCyMkIOFpD 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.840 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SCyMkIOFpD 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71487 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71487 /var/tmp/bdevperf.sock 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71487 ']' 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:36.841 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.841 [2024-11-12 10:35:25.588060] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:13:36.841 [2024-11-12 10:35:25.588468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71487 ] 00:13:37.100 [2024-11-12 10:35:25.738438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.100 [2024-11-12 10:35:25.768638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.100 [2024-11-12 10:35:25.796102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.100 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:37.100 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:37.100 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:37.668 [2024-11-12 10:35:26.117377] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SCyMkIOFpD': 0100666 00:13:37.668 [2024-11-12 10:35:26.117418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:37.668 request: 00:13:37.668 { 00:13:37.668 "name": "key0", 00:13:37.668 "path": "/tmp/tmp.SCyMkIOFpD", 00:13:37.668 "method": "keyring_file_add_key", 00:13:37.668 "req_id": 1 00:13:37.668 } 00:13:37.668 Got JSON-RPC error response 00:13:37.668 response: 00:13:37.668 { 00:13:37.668 "code": -1, 00:13:37.668 "message": "Operation not permitted" 00:13:37.668 } 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.668 [2024-11-12 10:35:26.361555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.668 [2024-11-12 10:35:26.361633] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:37.668 request: 00:13:37.668 { 00:13:37.668 "name": "TLSTEST", 00:13:37.668 "trtype": "tcp", 00:13:37.668 "traddr": "10.0.0.3", 00:13:37.668 "adrfam": "ipv4", 00:13:37.668 "trsvcid": "4420", 00:13:37.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.668 "prchk_reftag": false, 00:13:37.668 "prchk_guard": false, 00:13:37.668 "hdgst": false, 00:13:37.668 "ddgst": false, 00:13:37.668 "psk": "key0", 00:13:37.668 "allow_unrecognized_csi": false, 00:13:37.668 "method": "bdev_nvme_attach_controller", 00:13:37.668 "req_id": 1 00:13:37.668 } 00:13:37.668 Got JSON-RPC error response 00:13:37.668 response: 00:13:37.668 { 00:13:37.668 "code": -126, 00:13:37.668 "message": "Required key not available" 00:13:37.668 } 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71487 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71487 ']' 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71487 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71487 00:13:37.668 killing process with pid 71487 00:13:37.668 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.668 00:13:37.668 Latency(us) 00:13:37.668 [2024-11-12T10:35:26.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.668 [2024-11-12T10:35:26.426Z] =================================================================================================================== 00:13:37.668 [2024-11-12T10:35:26.426Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71487' 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71487 00:13:37.668 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71487 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71296 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71296 ']' 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71296 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71296 00:13:37.945 killing process with pid 71296 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71296' 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71296 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71296 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71513 00:13:37.945 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71513 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71513 ']' 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.204 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.204 [2024-11-12 10:35:26.769838] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:38.204 [2024-11-12 10:35:26.769940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.204 [2024-11-12 10:35:26.918962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.204 [2024-11-12 10:35:26.948581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.204 [2024-11-12 10:35:26.948646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.204 [2024-11-12 10:35:26.948671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.204 [2024-11-12 10:35:26.948678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.204 [2024-11-12 10:35:26.948684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
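Note: the -126 failure above comes from the keyring's permission check rather than from TLS itself. After the chmod 0666, keyring_file_add_key rejects /tmp/tmp.SCyMkIOFpD ("Invalid permissions for key file ... 0100666"), bdevperf therefore has no key0, and the attach reports "Required key not available". The same rejection recurs below when the restarted target's setup_nvmf_tgt tries to register the key, which is why nvmf_subsystem_add_host then fails with "Key 'key0' does not exist". Earlier in the run the same file was accepted at mode 0600, so a key file apparently must not be group- or world-accessible. A minimal sketch of creating a key file the keyring will accept (psk.key is a placeholder name):

    umask 077                                 # create the file without group/other access
    printf '%s' "$key_long" > /tmp/psk.key    # $key_long: interchange-format PSK from above
    chmod 0600 /tmp/psk.key                   # matches the mode the keyring accepted earlier

The harness itself chmods the file back to 0600 further down (target/tls.sh@182) before continuing.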
00:13:38.204 [2024-11-12 10:35:26.948934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.463 [2024-11-12 10:35:26.975856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:38.463 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SCyMkIOFpD 00:13:38.464 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:38.722 [2024-11-12 10:35:27.280679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.722 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.981 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:39.240 [2024-11-12 10:35:27.812764] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:39.240 [2024-11-12 10:35:27.812973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:39.240 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:39.499 malloc0 00:13:39.499 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:39.758 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:40.017 
[2024-11-12 10:35:28.518856] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SCyMkIOFpD': 0100666 00:13:40.017 [2024-11-12 10:35:28.518951] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:40.017 request: 00:13:40.017 { 00:13:40.017 "name": "key0", 00:13:40.017 "path": "/tmp/tmp.SCyMkIOFpD", 00:13:40.017 "method": "keyring_file_add_key", 00:13:40.017 "req_id": 1 00:13:40.017 } 00:13:40.017 Got JSON-RPC error response 00:13:40.017 response: 00:13:40.017 { 00:13:40.017 "code": -1, 00:13:40.017 "message": "Operation not permitted" 00:13:40.017 } 00:13:40.017 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:40.017 [2024-11-12 10:35:28.758925] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:40.017 [2024-11-12 10:35:28.759006] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:40.017 request: 00:13:40.017 { 00:13:40.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.017 "host": "nqn.2016-06.io.spdk:host1", 00:13:40.017 "psk": "key0", 00:13:40.017 "method": "nvmf_subsystem_add_host", 00:13:40.017 "req_id": 1 00:13:40.017 } 00:13:40.017 Got JSON-RPC error response 00:13:40.017 response: 00:13:40.017 { 00:13:40.017 "code": -32603, 00:13:40.017 "message": "Internal error" 00:13:40.017 } 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71513 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71513 ']' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71513 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71513 00:13:40.276 killing process with pid 71513 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71513' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71513 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71513 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.SCyMkIOFpD 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71574 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71574 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71574 ']' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:40.276 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.276 [2024-11-12 10:35:29.022468] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:40.276 [2024-11-12 10:35:29.022567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.536 [2024-11-12 10:35:29.167924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.536 [2024-11-12 10:35:29.194938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.536 [2024-11-12 10:35:29.195288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.536 [2024-11-12 10:35:29.195308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.536 [2024-11-12 10:35:29.195317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.536 [2024-11-12 10:35:29.195323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
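The failed keyring_file_add_key / nvmf_subsystem_add_host pair above is a deliberate negative test (target/tls.sh@178): the file-based keyring rejects a PSK whose mode is 0100666, and only after target/tls.sh@182 runs chmod 0600 on it does registration go through. Condensed from the trace, the working registration path looks like this (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the key file name and NQNs are the ones generated by this run):

  chmod 0600 /tmp/tmp.SCyMkIOFpD
  rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0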
00:13:40.536 [2024-11-12 10:35:29.195649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.536 [2024-11-12 10:35:29.222318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:40.536 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.536 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:40.536 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.536 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.536 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.794 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.794 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:40.794 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SCyMkIOFpD 00:13:40.794 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:41.052 [2024-11-12 10:35:29.583320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.053 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:41.310 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:41.568 [2024-11-12 10:35:30.103393] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:41.568 [2024-11-12 10:35:30.103737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:41.569 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:41.569 malloc0 00:13:41.827 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:41.827 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:42.086 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
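Condensed for readability, the setup_nvmf_tgt helper traced above (target/tls.sh@50-59) issues the following RPC sequence against the second target; the -k on the listener is what enables the TLS-capable listen path that the "TLS support is considered experimental" notice refers to (rpc.py again abbreviates the full script path):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0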
00:13:42.345 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71623 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71623 /var/tmp/bdevperf.sock 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71623 ']' 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:42.346 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.346 [2024-11-12 10:35:31.100043] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:42.346 [2024-11-12 10:35:31.100377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71623 ] 00:13:42.605 [2024-11-12 10:35:31.249976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.605 [2024-11-12 10:35:31.290241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.605 [2024-11-12 10:35:31.324778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:43.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:43.543 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:43.802 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:43.802 [2024-11-12 10:35:32.530334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.061 TLSTESTn1 00:13:44.061 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:44.321 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:44.321 "subsystems": [ 00:13:44.321 { 00:13:44.321 "subsystem": "keyring", 00:13:44.321 "config": [ 00:13:44.322 { 00:13:44.322 "method": "keyring_file_add_key", 00:13:44.322 "params": { 00:13:44.322 "name": "key0", 00:13:44.322 "path": "/tmp/tmp.SCyMkIOFpD" 00:13:44.322 } 00:13:44.322 } 00:13:44.322 ] 00:13:44.322 }, 
00:13:44.322 { 00:13:44.322 "subsystem": "iobuf", 00:13:44.322 "config": [ 00:13:44.322 { 00:13:44.322 "method": "iobuf_set_options", 00:13:44.322 "params": { 00:13:44.322 "small_pool_count": 8192, 00:13:44.322 "large_pool_count": 1024, 00:13:44.322 "small_bufsize": 8192, 00:13:44.322 "large_bufsize": 135168, 00:13:44.322 "enable_numa": false 00:13:44.322 } 00:13:44.322 } 00:13:44.322 ] 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "subsystem": "sock", 00:13:44.322 "config": [ 00:13:44.322 { 00:13:44.322 "method": "sock_set_default_impl", 00:13:44.322 "params": { 00:13:44.322 "impl_name": "uring" 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "sock_impl_set_options", 00:13:44.322 "params": { 00:13:44.322 "impl_name": "ssl", 00:13:44.322 "recv_buf_size": 4096, 00:13:44.322 "send_buf_size": 4096, 00:13:44.322 "enable_recv_pipe": true, 00:13:44.322 "enable_quickack": false, 00:13:44.322 "enable_placement_id": 0, 00:13:44.322 "enable_zerocopy_send_server": true, 00:13:44.322 "enable_zerocopy_send_client": false, 00:13:44.322 "zerocopy_threshold": 0, 00:13:44.322 "tls_version": 0, 00:13:44.322 "enable_ktls": false 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "sock_impl_set_options", 00:13:44.322 "params": { 00:13:44.322 "impl_name": "posix", 00:13:44.322 "recv_buf_size": 2097152, 00:13:44.322 "send_buf_size": 2097152, 00:13:44.322 "enable_recv_pipe": true, 00:13:44.322 "enable_quickack": false, 00:13:44.322 "enable_placement_id": 0, 00:13:44.322 "enable_zerocopy_send_server": true, 00:13:44.322 "enable_zerocopy_send_client": false, 00:13:44.322 "zerocopy_threshold": 0, 00:13:44.322 "tls_version": 0, 00:13:44.322 "enable_ktls": false 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "sock_impl_set_options", 00:13:44.322 "params": { 00:13:44.322 "impl_name": "uring", 00:13:44.322 "recv_buf_size": 2097152, 00:13:44.322 "send_buf_size": 2097152, 00:13:44.322 "enable_recv_pipe": true, 00:13:44.322 "enable_quickack": false, 00:13:44.322 "enable_placement_id": 0, 00:13:44.322 "enable_zerocopy_send_server": false, 00:13:44.322 "enable_zerocopy_send_client": false, 00:13:44.322 "zerocopy_threshold": 0, 00:13:44.322 "tls_version": 0, 00:13:44.322 "enable_ktls": false 00:13:44.322 } 00:13:44.322 } 00:13:44.322 ] 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "subsystem": "vmd", 00:13:44.322 "config": [] 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "subsystem": "accel", 00:13:44.322 "config": [ 00:13:44.322 { 00:13:44.322 "method": "accel_set_options", 00:13:44.322 "params": { 00:13:44.322 "small_cache_size": 128, 00:13:44.322 "large_cache_size": 16, 00:13:44.322 "task_count": 2048, 00:13:44.322 "sequence_count": 2048, 00:13:44.322 "buf_count": 2048 00:13:44.322 } 00:13:44.322 } 00:13:44.322 ] 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "subsystem": "bdev", 00:13:44.322 "config": [ 00:13:44.322 { 00:13:44.322 "method": "bdev_set_options", 00:13:44.322 "params": { 00:13:44.322 "bdev_io_pool_size": 65535, 00:13:44.322 "bdev_io_cache_size": 256, 00:13:44.322 "bdev_auto_examine": true, 00:13:44.322 "iobuf_small_cache_size": 128, 00:13:44.322 "iobuf_large_cache_size": 16 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "bdev_raid_set_options", 00:13:44.322 "params": { 00:13:44.322 "process_window_size_kb": 1024, 00:13:44.322 "process_max_bandwidth_mb_sec": 0 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "bdev_iscsi_set_options", 00:13:44.322 "params": { 00:13:44.322 "timeout_sec": 30 00:13:44.322 } 00:13:44.322 
}, 00:13:44.322 { 00:13:44.322 "method": "bdev_nvme_set_options", 00:13:44.322 "params": { 00:13:44.322 "action_on_timeout": "none", 00:13:44.322 "timeout_us": 0, 00:13:44.322 "timeout_admin_us": 0, 00:13:44.322 "keep_alive_timeout_ms": 10000, 00:13:44.322 "arbitration_burst": 0, 00:13:44.322 "low_priority_weight": 0, 00:13:44.322 "medium_priority_weight": 0, 00:13:44.322 "high_priority_weight": 0, 00:13:44.322 "nvme_adminq_poll_period_us": 10000, 00:13:44.322 "nvme_ioq_poll_period_us": 0, 00:13:44.322 "io_queue_requests": 0, 00:13:44.322 "delay_cmd_submit": true, 00:13:44.322 "transport_retry_count": 4, 00:13:44.322 "bdev_retry_count": 3, 00:13:44.322 "transport_ack_timeout": 0, 00:13:44.322 "ctrlr_loss_timeout_sec": 0, 00:13:44.322 "reconnect_delay_sec": 0, 00:13:44.322 "fast_io_fail_timeout_sec": 0, 00:13:44.322 "disable_auto_failback": false, 00:13:44.322 "generate_uuids": false, 00:13:44.322 "transport_tos": 0, 00:13:44.322 "nvme_error_stat": false, 00:13:44.322 "rdma_srq_size": 0, 00:13:44.322 "io_path_stat": false, 00:13:44.322 "allow_accel_sequence": false, 00:13:44.322 "rdma_max_cq_size": 0, 00:13:44.322 "rdma_cm_event_timeout_ms": 0, 00:13:44.322 "dhchap_digests": [ 00:13:44.322 "sha256", 00:13:44.322 "sha384", 00:13:44.322 "sha512" 00:13:44.322 ], 00:13:44.322 "dhchap_dhgroups": [ 00:13:44.322 "null", 00:13:44.322 "ffdhe2048", 00:13:44.322 "ffdhe3072", 00:13:44.322 "ffdhe4096", 00:13:44.322 "ffdhe6144", 00:13:44.322 "ffdhe8192" 00:13:44.322 ] 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "bdev_nvme_set_hotplug", 00:13:44.322 "params": { 00:13:44.322 "period_us": 100000, 00:13:44.322 "enable": false 00:13:44.322 } 00:13:44.322 }, 00:13:44.322 { 00:13:44.322 "method": "bdev_malloc_create", 00:13:44.322 "params": { 00:13:44.322 "name": "malloc0", 00:13:44.322 "num_blocks": 8192, 00:13:44.322 "block_size": 4096, 00:13:44.322 "physical_block_size": 4096, 00:13:44.322 "uuid": "e73a19a6-7ba6-4e6f-9469-e7e3f7091608", 00:13:44.322 "optimal_io_boundary": 0, 00:13:44.322 "md_size": 0, 00:13:44.322 "dif_type": 0, 00:13:44.322 "dif_is_head_of_md": false, 00:13:44.323 "dif_pi_format": 0 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "bdev_wait_for_examine" 00:13:44.323 } 00:13:44.323 ] 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "subsystem": "nbd", 00:13:44.323 "config": [] 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "subsystem": "scheduler", 00:13:44.323 "config": [ 00:13:44.323 { 00:13:44.323 "method": "framework_set_scheduler", 00:13:44.323 "params": { 00:13:44.323 "name": "static" 00:13:44.323 } 00:13:44.323 } 00:13:44.323 ] 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "subsystem": "nvmf", 00:13:44.323 "config": [ 00:13:44.323 { 00:13:44.323 "method": "nvmf_set_config", 00:13:44.323 "params": { 00:13:44.323 "discovery_filter": "match_any", 00:13:44.323 "admin_cmd_passthru": { 00:13:44.323 "identify_ctrlr": false 00:13:44.323 }, 00:13:44.323 "dhchap_digests": [ 00:13:44.323 "sha256", 00:13:44.323 "sha384", 00:13:44.323 "sha512" 00:13:44.323 ], 00:13:44.323 "dhchap_dhgroups": [ 00:13:44.323 "null", 00:13:44.323 "ffdhe2048", 00:13:44.323 "ffdhe3072", 00:13:44.323 "ffdhe4096", 00:13:44.323 "ffdhe6144", 00:13:44.323 "ffdhe8192" 00:13:44.323 ] 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_set_max_subsystems", 00:13:44.323 "params": { 00:13:44.323 "max_subsystems": 1024 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_set_crdt", 00:13:44.323 "params": { 00:13:44.323 "crdt1": 0, 00:13:44.323 
"crdt2": 0, 00:13:44.323 "crdt3": 0 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_create_transport", 00:13:44.323 "params": { 00:13:44.323 "trtype": "TCP", 00:13:44.323 "max_queue_depth": 128, 00:13:44.323 "max_io_qpairs_per_ctrlr": 127, 00:13:44.323 "in_capsule_data_size": 4096, 00:13:44.323 "max_io_size": 131072, 00:13:44.323 "io_unit_size": 131072, 00:13:44.323 "max_aq_depth": 128, 00:13:44.323 "num_shared_buffers": 511, 00:13:44.323 "buf_cache_size": 4294967295, 00:13:44.323 "dif_insert_or_strip": false, 00:13:44.323 "zcopy": false, 00:13:44.323 "c2h_success": false, 00:13:44.323 "sock_priority": 0, 00:13:44.323 "abort_timeout_sec": 1, 00:13:44.323 "ack_timeout": 0, 00:13:44.323 "data_wr_pool_size": 0 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_create_subsystem", 00:13:44.323 "params": { 00:13:44.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.323 "allow_any_host": false, 00:13:44.323 "serial_number": "SPDK00000000000001", 00:13:44.323 "model_number": "SPDK bdev Controller", 00:13:44.323 "max_namespaces": 10, 00:13:44.323 "min_cntlid": 1, 00:13:44.323 "max_cntlid": 65519, 00:13:44.323 "ana_reporting": false 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_subsystem_add_host", 00:13:44.323 "params": { 00:13:44.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.323 "host": "nqn.2016-06.io.spdk:host1", 00:13:44.323 "psk": "key0" 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_subsystem_add_ns", 00:13:44.323 "params": { 00:13:44.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.323 "namespace": { 00:13:44.323 "nsid": 1, 00:13:44.323 "bdev_name": "malloc0", 00:13:44.323 "nguid": "E73A19A67BA64E6F9469E7E3F7091608", 00:13:44.323 "uuid": "e73a19a6-7ba6-4e6f-9469-e7e3f7091608", 00:13:44.323 "no_auto_visible": false 00:13:44.323 } 00:13:44.323 } 00:13:44.323 }, 00:13:44.323 { 00:13:44.323 "method": "nvmf_subsystem_add_listener", 00:13:44.323 "params": { 00:13:44.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.323 "listen_address": { 00:13:44.323 "trtype": "TCP", 00:13:44.323 "adrfam": "IPv4", 00:13:44.323 "traddr": "10.0.0.3", 00:13:44.323 "trsvcid": "4420" 00:13:44.323 }, 00:13:44.323 "secure_channel": true 00:13:44.323 } 00:13:44.323 } 00:13:44.323 ] 00:13:44.323 } 00:13:44.323 ] 00:13:44.323 }' 00:13:44.323 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:44.583 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:44.583 "subsystems": [ 00:13:44.583 { 00:13:44.583 "subsystem": "keyring", 00:13:44.583 "config": [ 00:13:44.583 { 00:13:44.583 "method": "keyring_file_add_key", 00:13:44.583 "params": { 00:13:44.583 "name": "key0", 00:13:44.583 "path": "/tmp/tmp.SCyMkIOFpD" 00:13:44.583 } 00:13:44.583 } 00:13:44.583 ] 00:13:44.583 }, 00:13:44.583 { 00:13:44.583 "subsystem": "iobuf", 00:13:44.583 "config": [ 00:13:44.583 { 00:13:44.583 "method": "iobuf_set_options", 00:13:44.583 "params": { 00:13:44.583 "small_pool_count": 8192, 00:13:44.583 "large_pool_count": 1024, 00:13:44.583 "small_bufsize": 8192, 00:13:44.583 "large_bufsize": 135168, 00:13:44.583 "enable_numa": false 00:13:44.583 } 00:13:44.583 } 00:13:44.583 ] 00:13:44.583 }, 00:13:44.583 { 00:13:44.583 "subsystem": "sock", 00:13:44.583 "config": [ 00:13:44.583 { 00:13:44.583 "method": "sock_set_default_impl", 00:13:44.583 "params": { 00:13:44.583 "impl_name": "uring" 00:13:44.583 
} 00:13:44.583 }, 00:13:44.583 { 00:13:44.583 "method": "sock_impl_set_options", 00:13:44.583 "params": { 00:13:44.583 "impl_name": "ssl", 00:13:44.583 "recv_buf_size": 4096, 00:13:44.583 "send_buf_size": 4096, 00:13:44.583 "enable_recv_pipe": true, 00:13:44.583 "enable_quickack": false, 00:13:44.583 "enable_placement_id": 0, 00:13:44.583 "enable_zerocopy_send_server": true, 00:13:44.583 "enable_zerocopy_send_client": false, 00:13:44.583 "zerocopy_threshold": 0, 00:13:44.583 "tls_version": 0, 00:13:44.584 "enable_ktls": false 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "sock_impl_set_options", 00:13:44.584 "params": { 00:13:44.584 "impl_name": "posix", 00:13:44.584 "recv_buf_size": 2097152, 00:13:44.584 "send_buf_size": 2097152, 00:13:44.584 "enable_recv_pipe": true, 00:13:44.584 "enable_quickack": false, 00:13:44.584 "enable_placement_id": 0, 00:13:44.584 "enable_zerocopy_send_server": true, 00:13:44.584 "enable_zerocopy_send_client": false, 00:13:44.584 "zerocopy_threshold": 0, 00:13:44.584 "tls_version": 0, 00:13:44.584 "enable_ktls": false 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "sock_impl_set_options", 00:13:44.584 "params": { 00:13:44.584 "impl_name": "uring", 00:13:44.584 "recv_buf_size": 2097152, 00:13:44.584 "send_buf_size": 2097152, 00:13:44.584 "enable_recv_pipe": true, 00:13:44.584 "enable_quickack": false, 00:13:44.584 "enable_placement_id": 0, 00:13:44.584 "enable_zerocopy_send_server": false, 00:13:44.584 "enable_zerocopy_send_client": false, 00:13:44.584 "zerocopy_threshold": 0, 00:13:44.584 "tls_version": 0, 00:13:44.584 "enable_ktls": false 00:13:44.584 } 00:13:44.584 } 00:13:44.584 ] 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "subsystem": "vmd", 00:13:44.584 "config": [] 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "subsystem": "accel", 00:13:44.584 "config": [ 00:13:44.584 { 00:13:44.584 "method": "accel_set_options", 00:13:44.584 "params": { 00:13:44.584 "small_cache_size": 128, 00:13:44.584 "large_cache_size": 16, 00:13:44.584 "task_count": 2048, 00:13:44.584 "sequence_count": 2048, 00:13:44.584 "buf_count": 2048 00:13:44.584 } 00:13:44.584 } 00:13:44.584 ] 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "subsystem": "bdev", 00:13:44.584 "config": [ 00:13:44.584 { 00:13:44.584 "method": "bdev_set_options", 00:13:44.584 "params": { 00:13:44.584 "bdev_io_pool_size": 65535, 00:13:44.584 "bdev_io_cache_size": 256, 00:13:44.584 "bdev_auto_examine": true, 00:13:44.584 "iobuf_small_cache_size": 128, 00:13:44.584 "iobuf_large_cache_size": 16 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_raid_set_options", 00:13:44.584 "params": { 00:13:44.584 "process_window_size_kb": 1024, 00:13:44.584 "process_max_bandwidth_mb_sec": 0 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_iscsi_set_options", 00:13:44.584 "params": { 00:13:44.584 "timeout_sec": 30 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_nvme_set_options", 00:13:44.584 "params": { 00:13:44.584 "action_on_timeout": "none", 00:13:44.584 "timeout_us": 0, 00:13:44.584 "timeout_admin_us": 0, 00:13:44.584 "keep_alive_timeout_ms": 10000, 00:13:44.584 "arbitration_burst": 0, 00:13:44.584 "low_priority_weight": 0, 00:13:44.584 "medium_priority_weight": 0, 00:13:44.584 "high_priority_weight": 0, 00:13:44.584 "nvme_adminq_poll_period_us": 10000, 00:13:44.584 "nvme_ioq_poll_period_us": 0, 00:13:44.584 "io_queue_requests": 512, 00:13:44.584 "delay_cmd_submit": true, 00:13:44.584 "transport_retry_count": 4, 
00:13:44.584 "bdev_retry_count": 3, 00:13:44.584 "transport_ack_timeout": 0, 00:13:44.584 "ctrlr_loss_timeout_sec": 0, 00:13:44.584 "reconnect_delay_sec": 0, 00:13:44.584 "fast_io_fail_timeout_sec": 0, 00:13:44.584 "disable_auto_failback": false, 00:13:44.584 "generate_uuids": false, 00:13:44.584 "transport_tos": 0, 00:13:44.584 "nvme_error_stat": false, 00:13:44.584 "rdma_srq_size": 0, 00:13:44.584 "io_path_stat": false, 00:13:44.584 "allow_accel_sequence": false, 00:13:44.584 "rdma_max_cq_size": 0, 00:13:44.584 "rdma_cm_event_timeout_ms": 0, 00:13:44.584 "dhchap_digests": [ 00:13:44.584 "sha256", 00:13:44.584 "sha384", 00:13:44.584 "sha512" 00:13:44.584 ], 00:13:44.584 "dhchap_dhgroups": [ 00:13:44.584 "null", 00:13:44.584 "ffdhe2048", 00:13:44.584 "ffdhe3072", 00:13:44.584 "ffdhe4096", 00:13:44.584 "ffdhe6144", 00:13:44.584 "ffdhe8192" 00:13:44.584 ] 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_nvme_attach_controller", 00:13:44.584 "params": { 00:13:44.584 "name": "TLSTEST", 00:13:44.584 "trtype": "TCP", 00:13:44.584 "adrfam": "IPv4", 00:13:44.584 "traddr": "10.0.0.3", 00:13:44.584 "trsvcid": "4420", 00:13:44.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.584 "prchk_reftag": false, 00:13:44.584 "prchk_guard": false, 00:13:44.584 "ctrlr_loss_timeout_sec": 0, 00:13:44.584 "reconnect_delay_sec": 0, 00:13:44.584 "fast_io_fail_timeout_sec": 0, 00:13:44.584 "psk": "key0", 00:13:44.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.584 "hdgst": false, 00:13:44.584 "ddgst": false, 00:13:44.584 "multipath": "multipath" 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_nvme_set_hotplug", 00:13:44.584 "params": { 00:13:44.584 "period_us": 100000, 00:13:44.584 "enable": false 00:13:44.584 } 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "method": "bdev_wait_for_examine" 00:13:44.584 } 00:13:44.584 ] 00:13:44.584 }, 00:13:44.584 { 00:13:44.584 "subsystem": "nbd", 00:13:44.584 "config": [] 00:13:44.584 } 00:13:44.584 ] 00:13:44.584 }' 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71623 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71623 ']' 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71623 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.584 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71623 00:13:44.844 killing process with pid 71623 00:13:44.844 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.844 00:13:44.844 Latency(us) 00:13:44.844 [2024-11-12T10:35:33.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.844 [2024-11-12T10:35:33.602Z] =================================================================================================================== 00:13:44.844 [2024-11-12T10:35:33.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 71623' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71623 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71623 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71574 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71574 ']' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71574 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71574 00:13:44.844 killing process with pid 71574 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71574' 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71574 00:13:44.844 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71574 00:13:45.105 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:45.105 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.105 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.105 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:45.105 "subsystems": [ 00:13:45.105 { 00:13:45.105 "subsystem": "keyring", 00:13:45.105 "config": [ 00:13:45.105 { 00:13:45.105 "method": "keyring_file_add_key", 00:13:45.105 "params": { 00:13:45.105 "name": "key0", 00:13:45.105 "path": "/tmp/tmp.SCyMkIOFpD" 00:13:45.105 } 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "subsystem": "iobuf", 00:13:45.105 "config": [ 00:13:45.105 { 00:13:45.105 "method": "iobuf_set_options", 00:13:45.105 "params": { 00:13:45.105 "small_pool_count": 8192, 00:13:45.105 "large_pool_count": 1024, 00:13:45.105 "small_bufsize": 8192, 00:13:45.105 "large_bufsize": 135168, 00:13:45.105 "enable_numa": false 00:13:45.105 } 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "subsystem": "sock", 00:13:45.105 "config": [ 00:13:45.105 { 00:13:45.105 "method": "sock_set_default_impl", 00:13:45.105 "params": { 00:13:45.105 "impl_name": "uring" 00:13:45.105 } 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "method": "sock_impl_set_options", 00:13:45.105 "params": { 00:13:45.105 "impl_name": "ssl", 00:13:45.105 "recv_buf_size": 4096, 00:13:45.105 "send_buf_size": 4096, 00:13:45.105 "enable_recv_pipe": true, 00:13:45.105 "enable_quickack": false, 00:13:45.105 "enable_placement_id": 0, 00:13:45.105 "enable_zerocopy_send_server": true, 00:13:45.105 "enable_zerocopy_send_client": false, 00:13:45.105 "zerocopy_threshold": 0, 00:13:45.105 "tls_version": 0, 00:13:45.105 
"enable_ktls": false 00:13:45.105 } 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "method": "sock_impl_set_options", 00:13:45.105 "params": { 00:13:45.105 "impl_name": "posix", 00:13:45.105 "recv_buf_size": 2097152, 00:13:45.105 "send_buf_size": 2097152, 00:13:45.105 "enable_recv_pipe": true, 00:13:45.105 "enable_quickack": false, 00:13:45.105 "enable_placement_id": 0, 00:13:45.105 "enable_zerocopy_send_server": true, 00:13:45.105 "enable_zerocopy_send_client": false, 00:13:45.105 "zerocopy_threshold": 0, 00:13:45.105 "tls_version": 0, 00:13:45.105 "enable_ktls": false 00:13:45.105 } 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "method": "sock_impl_set_options", 00:13:45.105 "params": { 00:13:45.105 "impl_name": "uring", 00:13:45.105 "recv_buf_size": 2097152, 00:13:45.105 "send_buf_size": 2097152, 00:13:45.105 "enable_recv_pipe": true, 00:13:45.105 "enable_quickack": false, 00:13:45.105 "enable_placement_id": 0, 00:13:45.105 "enable_zerocopy_send_server": false, 00:13:45.105 "enable_zerocopy_send_client": false, 00:13:45.105 "zerocopy_threshold": 0, 00:13:45.105 "tls_version": 0, 00:13:45.105 "enable_ktls": false 00:13:45.105 } 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "subsystem": "vmd", 00:13:45.105 "config": [] 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "subsystem": "accel", 00:13:45.105 "config": [ 00:13:45.105 { 00:13:45.105 "method": "accel_set_options", 00:13:45.105 "params": { 00:13:45.105 "small_cache_size": 128, 00:13:45.105 "large_cache_size": 16, 00:13:45.105 "task_count": 2048, 00:13:45.105 "sequence_count": 2048, 00:13:45.105 "buf_count": 2048 00:13:45.105 } 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "subsystem": "bdev", 00:13:45.105 "config": [ 00:13:45.105 { 00:13:45.105 "method": "bdev_set_options", 00:13:45.105 "params": { 00:13:45.105 "bdev_io_pool_size": 65535, 00:13:45.105 "bdev_io_cache_size": 256, 00:13:45.105 "bdev_auto_examine": true, 00:13:45.105 "iobuf_small_cache_size": 128, 00:13:45.105 "iobuf_large_cache_size": 16 00:13:45.105 } 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "method": "bdev_raid_set_options", 00:13:45.105 "params": { 00:13:45.105 "process_window_size_kb": 1024, 00:13:45.105 "process_max_bandwidth_mb_sec": 0 00:13:45.105 } 00:13:45.105 }, 00:13:45.105 { 00:13:45.106 "method": "bdev_iscsi_set_options", 00:13:45.106 "params": { 00:13:45.106 "timeout_sec": 30 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "bdev_nvme_set_options", 00:13:45.106 "params": { 00:13:45.106 "action_on_timeout": "none", 00:13:45.106 "timeout_us": 0, 00:13:45.106 "timeout_admin_us": 0, 00:13:45.106 "keep_alive_timeout_ms": 10000, 00:13:45.106 "arbitration_burst": 0, 00:13:45.106 "low_priority_weight": 0, 00:13:45.106 "medium_priority_weight": 0, 00:13:45.106 "high_priority_weight": 0, 00:13:45.106 "nvme_adminq_poll_period_us": 10000, 00:13:45.106 "nvme_ioq_poll_period_us": 0, 00:13:45.106 "io_queue_requests": 0, 00:13:45.106 "delay_cmd_submit": true, 00:13:45.106 "transport_retry_count": 4, 00:13:45.106 "bdev_retry_count": 3, 00:13:45.106 "transport_ack_timeout": 0, 00:13:45.106 "ctrlr_loss_timeout_sec": 0, 00:13:45.106 "reconnect_delay_sec": 0, 00:13:45.106 "fast_io_fail_timeout_sec": 0, 00:13:45.106 "disable_auto_failback": false, 00:13:45.106 "generate_uuids": false, 00:13:45.106 "transport_tos": 0, 00:13:45.106 "nvme_error_stat": false, 00:13:45.106 "rdma_srq_size": 0, 00:13:45.106 "io_path_stat": false, 00:13:45.106 "allow_accel_sequence": false, 00:13:45.106 "rdma_max_cq_size": 0, 
00:13:45.106 "rdma_cm_event_timeout_ms": 0, 00:13:45.106 "dhchap_digests": [ 00:13:45.106 "sha256", 00:13:45.106 "sha384", 00:13:45.106 "sha512" 00:13:45.106 ], 00:13:45.106 "dhchap_dhgroups": [ 00:13:45.106 "null", 00:13:45.106 "ffdhe2048", 00:13:45.106 "ffdhe3072", 00:13:45.106 "ffdhe4096", 00:13:45.106 "ffdhe6144", 00:13:45.106 "ffdhe8192" 00:13:45.106 ] 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "bdev_nvme_set_hotplug", 00:13:45.106 "params": { 00:13:45.106 "period_us": 100000, 00:13:45.106 "enable": false 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "bdev_malloc_create", 00:13:45.106 "params": { 00:13:45.106 "name": "malloc0", 00:13:45.106 "num_blocks": 8192, 00:13:45.106 "block_size": 4096, 00:13:45.106 "physical_block_size": 4096, 00:13:45.106 "uuid": "e73a19a6-7ba6-4e6f-9469-e7e3f7091608", 00:13:45.106 "optimal_io_boundary": 0, 00:13:45.106 "md_size": 0, 00:13:45.106 "dif_type": 0, 00:13:45.106 "dif_is_head_of_md": false, 00:13:45.106 "dif_pi_format": 0 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "bdev_wait_for_examine" 00:13:45.106 } 00:13:45.106 ] 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "subsystem": "nbd", 00:13:45.106 "config": [] 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "subsystem": "scheduler", 00:13:45.106 "config": [ 00:13:45.106 { 00:13:45.106 "method": "framework_set_scheduler", 00:13:45.106 "params": { 00:13:45.106 "name": "static" 00:13:45.106 } 00:13:45.106 } 00:13:45.106 ] 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "subsystem": "nvmf", 00:13:45.106 "config": [ 00:13:45.106 { 00:13:45.106 "method": "nvmf_set_config", 00:13:45.106 "params": { 00:13:45.106 "discovery_filter": "match_any", 00:13:45.106 "admin_cmd_passthru": { 00:13:45.106 "identify_ctrlr": false 00:13:45.106 }, 00:13:45.106 "dhchap_digests": [ 00:13:45.106 "sha256", 00:13:45.106 "sha384", 00:13:45.106 "sha512" 00:13:45.106 ], 00:13:45.106 "dhchap_dhgroups": [ 00:13:45.106 "null", 00:13:45.106 "ffdhe2048", 00:13:45.106 "ffdhe3072", 00:13:45.106 "ffdhe4096", 00:13:45.106 "ffdhe6144", 00:13:45.106 "ffdhe8192" 00:13:45.106 ] 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_set_max_subsystems", 00:13:45.106 "params": { 00:13:45.106 "max_subsystems": 1024 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_set_crdt", 00:13:45.106 "params": { 00:13:45.106 "crdt1": 0, 00:13:45.106 "crdt2": 0, 00:13:45.106 "crdt3": 0 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_create_transport", 00:13:45.106 "params": { 00:13:45.106 "trtype": "TCP", 00:13:45.106 "max_queue_depth": 128, 00:13:45.106 "max_io_qpairs_per_ctrlr": 127, 00:13:45.106 "in_capsule_data_size": 4096, 00:13:45.106 "max_io_size": 131072, 00:13:45.106 "io_unit_size": 131072, 00:13:45.106 "max_aq_depth": 128, 00:13:45.106 "num_shared_buffers": 511, 00:13:45.106 "buf_cache_size": 4294967295, 00:13:45.106 "dif_insert_or_strip": false, 00:13:45.106 "zcopy": false, 00:13:45.106 "c2h_success": false, 00:13:45.106 "sock_priority": 0, 00:13:45.106 "abort_timeout_sec": 1, 00:13:45.106 "ack_timeout": 0, 00:13:45.106 "data_wr_pool_size": 0 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_create_subsystem", 00:13:45.106 "params": { 00:13:45.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.106 "allow_any_host": false, 00:13:45.106 "serial_number": "SPDK00000000000001", 00:13:45.106 "model_number": "SPDK bdev Controller", 00:13:45.106 "max_namespaces": 10, 00:13:45.106 "min_cntlid": 1, 
00:13:45.106 "max_cntlid": 65519, 00:13:45.106 "ana_reporting": false 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_subsystem_add_host", 00:13:45.106 "params": { 00:13:45.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.106 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.106 "psk": "key0" 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_subsystem_add_ns", 00:13:45.106 "params": { 00:13:45.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.106 "namespace": { 00:13:45.106 "nsid": 1, 00:13:45.106 "bdev_name": "malloc0", 00:13:45.106 "nguid": "E73A19A67BA64E6F9469E7E3F7091608", 00:13:45.106 "uuid": "e73a19a6-7ba6-4e6f-9469-e7e3f7091608", 00:13:45.106 "no_auto_visible": false 00:13:45.106 } 00:13:45.106 } 00:13:45.106 }, 00:13:45.106 { 00:13:45.106 "method": "nvmf_subsystem_add_listener", 00:13:45.106 "params": { 00:13:45.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.106 "listen_address": { 00:13:45.106 "trtype": "TCP", 00:13:45.106 "adrfam": "IPv4", 00:13:45.106 "traddr": "10.0.0.3", 00:13:45.107 "trsvcid": "4420" 00:13:45.107 }, 00:13:45.107 "secure_channel": true 00:13:45.107 } 00:13:45.107 } 00:13:45.107 ] 00:13:45.107 } 00:13:45.107 ] 00:13:45.107 }' 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71667 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71667 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71667 ']' 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.107 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.107 [2024-11-12 10:35:33.715812] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:45.107 [2024-11-12 10:35:33.716163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.367 [2024-11-12 10:35:33.864977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.367 [2024-11-12 10:35:33.898842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.367 [2024-11-12 10:35:33.898884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.367 [2024-11-12 10:35:33.898910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.367 [2024-11-12 10:35:33.898933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:45.367 [2024-11-12 10:35:33.898939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.367 [2024-11-12 10:35:33.899299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.367 [2024-11-12 10:35:34.038103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.367 [2024-11-12 10:35:34.093663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.626 [2024-11-12 10:35:34.125609] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.626 [2024-11-12 10:35:34.125872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71699 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71699 /var/tmp/bdevperf.sock 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71699 ']' 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
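Unlike the first bdevperf instance (which received key0 and the TLSTEST controller through rpc.py at target/tls.sh@193-194), the run below hands bdevperf its whole subsystem configuration up front through -c /dev/fd/63, with the JSON produced by the echo on the same command line. The keyring_file_add_key and bdev_nvme_attach_controller entries inside that JSON correspond roughly to the earlier RPC form:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0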
00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:46.195 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:46.195 "subsystems": [ 00:13:46.195 { 00:13:46.195 "subsystem": "keyring", 00:13:46.195 "config": [ 00:13:46.195 { 00:13:46.195 "method": "keyring_file_add_key", 00:13:46.195 "params": { 00:13:46.195 "name": "key0", 00:13:46.195 "path": "/tmp/tmp.SCyMkIOFpD" 00:13:46.195 } 00:13:46.195 } 00:13:46.195 ] 00:13:46.195 }, 00:13:46.195 { 00:13:46.195 "subsystem": "iobuf", 00:13:46.195 "config": [ 00:13:46.195 { 00:13:46.195 "method": "iobuf_set_options", 00:13:46.195 "params": { 00:13:46.195 "small_pool_count": 8192, 00:13:46.195 "large_pool_count": 1024, 00:13:46.195 "small_bufsize": 8192, 00:13:46.195 "large_bufsize": 135168, 00:13:46.195 "enable_numa": false 00:13:46.195 } 00:13:46.195 } 00:13:46.195 ] 00:13:46.195 }, 00:13:46.195 { 00:13:46.195 "subsystem": "sock", 00:13:46.195 "config": [ 00:13:46.195 { 00:13:46.195 "method": "sock_set_default_impl", 00:13:46.195 "params": { 00:13:46.195 "impl_name": "uring" 00:13:46.195 } 00:13:46.195 }, 00:13:46.195 { 00:13:46.195 "method": "sock_impl_set_options", 00:13:46.195 "params": { 00:13:46.195 "impl_name": "ssl", 00:13:46.195 "recv_buf_size": 4096, 00:13:46.195 "send_buf_size": 4096, 00:13:46.195 "enable_recv_pipe": true, 00:13:46.195 "enable_quickack": false, 00:13:46.195 "enable_placement_id": 0, 00:13:46.195 "enable_zerocopy_send_server": true, 00:13:46.195 "enable_zerocopy_send_client": false, 00:13:46.195 "zerocopy_threshold": 0, 00:13:46.195 "tls_version": 0, 00:13:46.195 "enable_ktls": false 00:13:46.195 } 00:13:46.195 }, 00:13:46.195 { 00:13:46.195 "method": "sock_impl_set_options", 00:13:46.195 "params": { 00:13:46.195 "impl_name": "posix", 00:13:46.195 "recv_buf_size": 2097152, 00:13:46.195 "send_buf_size": 2097152, 00:13:46.196 "enable_recv_pipe": true, 00:13:46.196 "enable_quickack": false, 00:13:46.196 "enable_placement_id": 0, 00:13:46.196 "enable_zerocopy_send_server": true, 00:13:46.196 "enable_zerocopy_send_client": false, 00:13:46.196 "zerocopy_threshold": 0, 00:13:46.196 "tls_version": 0, 00:13:46.196 "enable_ktls": false 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "sock_impl_set_options", 00:13:46.196 "params": { 00:13:46.196 "impl_name": "uring", 00:13:46.196 "recv_buf_size": 2097152, 00:13:46.196 "send_buf_size": 2097152, 00:13:46.196 "enable_recv_pipe": true, 00:13:46.196 "enable_quickack": false, 00:13:46.196 "enable_placement_id": 0, 00:13:46.196 "enable_zerocopy_send_server": false, 00:13:46.196 "enable_zerocopy_send_client": false, 00:13:46.196 "zerocopy_threshold": 0, 00:13:46.196 "tls_version": 0, 00:13:46.196 "enable_ktls": false 00:13:46.196 } 00:13:46.196 } 00:13:46.196 ] 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "subsystem": "vmd", 00:13:46.196 "config": [] 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "subsystem": "accel", 00:13:46.196 "config": [ 00:13:46.196 { 00:13:46.196 "method": "accel_set_options", 00:13:46.196 "params": { 00:13:46.196 "small_cache_size": 128, 00:13:46.196 "large_cache_size": 16, 00:13:46.196 "task_count": 2048, 00:13:46.196 "sequence_count": 2048, 00:13:46.196 "buf_count": 2048 00:13:46.196 } 00:13:46.196 } 00:13:46.196 ] 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "subsystem": "bdev", 00:13:46.196 "config": [ 00:13:46.196 { 00:13:46.196 "method": 
"bdev_set_options", 00:13:46.196 "params": { 00:13:46.196 "bdev_io_pool_size": 65535, 00:13:46.196 "bdev_io_cache_size": 256, 00:13:46.196 "bdev_auto_examine": true, 00:13:46.196 "iobuf_small_cache_size": 128, 00:13:46.196 "iobuf_large_cache_size": 16 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_raid_set_options", 00:13:46.196 "params": { 00:13:46.196 "process_window_size_kb": 1024, 00:13:46.196 "process_max_bandwidth_mb_sec": 0 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_iscsi_set_options", 00:13:46.196 "params": { 00:13:46.196 "timeout_sec": 30 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_nvme_set_options", 00:13:46.196 "params": { 00:13:46.196 "action_on_timeout": "none", 00:13:46.196 "timeout_us": 0, 00:13:46.196 "timeout_admin_us": 0, 00:13:46.196 "keep_alive_timeout_ms": 10000, 00:13:46.196 "arbitration_burst": 0, 00:13:46.196 "low_priority_weight": 0, 00:13:46.196 "medium_priority_weight": 0, 00:13:46.196 "high_priority_weight": 0, 00:13:46.196 "nvme_adminq_poll_period_us": 10000, 00:13:46.196 "nvme_ioq_poll_period_us": 0, 00:13:46.196 "io_queue_requests": 512, 00:13:46.196 "delay_cmd_submit": true, 00:13:46.196 "transport_retry_count": 4, 00:13:46.196 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.196 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.196 "bdev_retry_count": 3, 00:13:46.196 "transport_ack_timeout": 0, 00:13:46.196 "ctrlr_loss_timeout_sec": 0, 00:13:46.196 "reconnect_delay_sec": 0, 00:13:46.196 "fast_io_fail_timeout_sec": 0, 00:13:46.196 "disable_auto_failback": false, 00:13:46.196 "generate_uuids": false, 00:13:46.196 "transport_tos": 0, 00:13:46.196 "nvme_error_stat": false, 00:13:46.196 "rdma_srq_size": 0, 00:13:46.196 "io_path_stat": false, 00:13:46.196 "allow_accel_sequence": false, 00:13:46.196 "rdma_max_cq_size": 0, 00:13:46.196 "rdma_cm_event_timeout_ms": 0, 00:13:46.196 "dhchap_digests": [ 00:13:46.196 "sha256", 00:13:46.196 "sha384", 00:13:46.196 "sha512" 00:13:46.196 ], 00:13:46.196 "dhchap_dhgroups": [ 00:13:46.196 "null", 00:13:46.196 "ffdhe2048", 00:13:46.196 "ffdhe3072", 00:13:46.196 "ffdhe4096", 00:13:46.196 "ffdhe6144", 00:13:46.196 "ffdhe8192" 00:13:46.196 ] 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_nvme_attach_controller", 00:13:46.196 "params": { 00:13:46.196 "name": "TLSTEST", 00:13:46.196 "trtype": "TCP", 00:13:46.196 "adrfam": "IPv4", 00:13:46.196 "traddr": "10.0.0.3", 00:13:46.196 "trsvcid": "4420", 00:13:46.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.196 "prchk_reftag": false, 00:13:46.196 "prchk_guard": false, 00:13:46.196 "ctrlr_loss_timeout_sec": 0, 00:13:46.196 "reconnect_delay_sec": 0, 00:13:46.196 "fast_io_fail_timeout_sec": 0, 00:13:46.196 "psk": "key0", 00:13:46.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.196 "hdgst": false, 00:13:46.196 "ddgst": false, 00:13:46.196 "multipath": "multipath" 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_nvme_set_hotplug", 00:13:46.196 "params": { 00:13:46.196 "period_us": 100000, 00:13:46.196 "enable": false 00:13:46.196 } 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "method": "bdev_wait_for_examine" 00:13:46.196 } 00:13:46.196 ] 00:13:46.196 }, 00:13:46.196 { 00:13:46.196 "subsystem": "nbd", 00:13:46.196 "config": [] 00:13:46.196 } 00:13:46.196 ] 00:13:46.196 }' 00:13:46.196 [2024-11-12 10:35:34.773877] Starting SPDK v25.01-pre 
git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:46.196 [2024-11-12 10:35:34.774330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71699 ] 00:13:46.196 [2024-11-12 10:35:34.927285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.459 [2024-11-12 10:35:34.967281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.459 [2024-11-12 10:35:35.083563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.459 [2024-11-12 10:35:35.119041] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.078 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.078 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:47.078 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:47.078 Running I/O for 10 seconds... 00:13:49.394 4504.00 IOPS, 17.59 MiB/s [2024-11-12T10:35:39.092Z] 4567.00 IOPS, 17.84 MiB/s [2024-11-12T10:35:40.030Z] 4585.00 IOPS, 17.91 MiB/s [2024-11-12T10:35:40.966Z] 4602.75 IOPS, 17.98 MiB/s [2024-11-12T10:35:41.905Z] 4605.60 IOPS, 17.99 MiB/s [2024-11-12T10:35:42.845Z] 4612.00 IOPS, 18.02 MiB/s [2024-11-12T10:35:44.224Z] 4615.14 IOPS, 18.03 MiB/s [2024-11-12T10:35:44.793Z] 4614.38 IOPS, 18.02 MiB/s [2024-11-12T10:35:46.172Z] 4613.00 IOPS, 18.02 MiB/s [2024-11-12T10:35:46.172Z] 4598.30 IOPS, 17.96 MiB/s 00:13:57.414 Latency(us) 00:13:57.414 [2024-11-12T10:35:46.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.414 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:57.414 Verification LBA range: start 0x0 length 0x2000 00:13:57.414 TLSTESTn1 : 10.02 4602.17 17.98 0.00 0.00 27760.82 6911.07 22639.71 00:13:57.414 [2024-11-12T10:35:46.172Z] =================================================================================================================== 00:13:57.414 [2024-11-12T10:35:46.172Z] Total : 4602.17 17.98 0.00 0.00 27760.82 6911.07 22639.71 00:13:57.414 { 00:13:57.414 "results": [ 00:13:57.414 { 00:13:57.414 "job": "TLSTESTn1", 00:13:57.414 "core_mask": "0x4", 00:13:57.414 "workload": "verify", 00:13:57.414 "status": "finished", 00:13:57.414 "verify_range": { 00:13:57.414 "start": 0, 00:13:57.414 "length": 8192 00:13:57.414 }, 00:13:57.414 "queue_depth": 128, 00:13:57.414 "io_size": 4096, 00:13:57.414 "runtime": 10.018758, 00:13:57.414 "iops": 4602.167254663702, 00:13:57.414 "mibps": 17.977215838530086, 00:13:57.414 "io_failed": 0, 00:13:57.414 "io_timeout": 0, 00:13:57.414 "avg_latency_us": 27760.815812361492, 00:13:57.414 "min_latency_us": 6911.069090909091, 00:13:57.414 "max_latency_us": 22639.70909090909 00:13:57.414 } 00:13:57.414 ], 00:13:57.414 "core_count": 1 00:13:57.414 } 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71699 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71699 ']' 00:13:57.414 
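As a quick sanity check on the TLSTESTn1 table above, the MiB/s column follows directly from IOPS and the 4096-byte I/O size used by this bdevperf run:

  4602.17 IOPS x 4096 B = 18,850,488 B/s ≈ 17.98 MiB/s

which matches the reported throughput over the ~10 s verify workload.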
10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71699 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71699 00:13:57.414 killing process with pid 71699 00:13:57.414 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.414 00:13:57.414 Latency(us) 00:13:57.414 [2024-11-12T10:35:46.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.414 [2024-11-12T10:35:46.172Z] =================================================================================================================== 00:13:57.414 [2024-11-12T10:35:46.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71699' 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71699 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71699 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71667 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71667 ']' 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71667 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:57.414 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71667 00:13:57.414 killing process with pid 71667 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71667' 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71667 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71667 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71832 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71832 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71832 ']' 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.414 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.674 [2024-11-12 10:35:46.200409] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:57.674 [2024-11-12 10:35:46.200505] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.674 [2024-11-12 10:35:46.350691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.674 [2024-11-12 10:35:46.392800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.674 [2024-11-12 10:35:46.392886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.674 [2024-11-12 10:35:46.392912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.674 [2024-11-12 10:35:46.392922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.674 [2024-11-12 10:35:46.392931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.674 [2024-11-12 10:35:46.393405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.674 [2024-11-12 10:35:46.428612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.SCyMkIOFpD 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SCyMkIOFpD 00:13:57.933 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:58.192 [2024-11-12 10:35:46.727428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.192 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:58.451 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:58.710 [2024-11-12 10:35:47.247479] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:58.710 [2024-11-12 10:35:47.247949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:58.710 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:58.969 malloc0 00:13:58.969 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:59.228 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:13:59.487 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71880 00:13:59.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71880 /var/tmp/bdevperf.sock 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71880 ']' 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.746 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.746 [2024-11-12 10:35:48.292582] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:13:59.746 [2024-11-12 10:35:48.292838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71880 ] 00:13:59.746 [2024-11-12 10:35:48.432429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.746 [2024-11-12 10:35:48.461297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.746 [2024-11-12 10:35:48.488798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.005 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.005 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:00.005 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:14:00.263 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:00.263 [2024-11-12 10:35:48.975074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.522 nvme0n1 00:14:00.523 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:00.523 Running I/O for 1 seconds... 
00:14:01.715 4352.00 IOPS, 17.00 MiB/s 00:14:01.715 Latency(us) 00:14:01.715 [2024-11-12T10:35:50.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.715 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:01.715 Verification LBA range: start 0x0 length 0x2000 00:14:01.715 nvme0n1 : 1.03 4352.14 17.00 0.00 0.00 29100.05 6196.13 18469.24 00:14:01.715 [2024-11-12T10:35:50.473Z] =================================================================================================================== 00:14:01.715 [2024-11-12T10:35:50.473Z] Total : 4352.14 17.00 0.00 0.00 29100.05 6196.13 18469.24 00:14:01.715 { 00:14:01.715 "results": [ 00:14:01.715 { 00:14:01.715 "job": "nvme0n1", 00:14:01.715 "core_mask": "0x2", 00:14:01.715 "workload": "verify", 00:14:01.715 "status": "finished", 00:14:01.715 "verify_range": { 00:14:01.715 "start": 0, 00:14:01.715 "length": 8192 00:14:01.715 }, 00:14:01.715 "queue_depth": 128, 00:14:01.715 "io_size": 4096, 00:14:01.716 "runtime": 1.029378, 00:14:01.716 "iops": 4352.142750282209, 00:14:01.716 "mibps": 17.00055761828988, 00:14:01.716 "io_failed": 0, 00:14:01.716 "io_timeout": 0, 00:14:01.716 "avg_latency_us": 29100.048623376624, 00:14:01.716 "min_latency_us": 6196.130909090909, 00:14:01.716 "max_latency_us": 18469.236363636363 00:14:01.716 } 00:14:01.716 ], 00:14:01.716 "core_count": 1 00:14:01.716 } 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71880 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71880 ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71880 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71880 00:14:01.716 killing process with pid 71880 00:14:01.716 Received shutdown signal, test time was about 1.000000 seconds 00:14:01.716 00:14:01.716 Latency(us) 00:14:01.716 [2024-11-12T10:35:50.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.716 [2024-11-12T10:35:50.474Z] =================================================================================================================== 00:14:01.716 [2024-11-12T10:35:50.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71880' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71880 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71880 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71832 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71832 ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71832 00:14:01.716 10:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71832 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71832' 00:14:01.716 killing process with pid 71832 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71832 00:14:01.716 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71832 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71925 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71925 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71925 ']' 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.975 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.975 [2024-11-12 10:35:50.631181] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:01.975 [2024-11-12 10:35:50.631280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.235 [2024-11-12 10:35:50.769776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.235 [2024-11-12 10:35:50.797167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.235 [2024-11-12 10:35:50.797247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:02.235 [2024-11-12 10:35:50.797274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.235 [2024-11-12 10:35:50.797282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.235 [2024-11-12 10:35:50.797288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.235 [2024-11-12 10:35:50.797570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.235 [2024-11-12 10:35:50.825705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.235 [2024-11-12 10:35:50.919684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.235 malloc0 00:14:02.235 [2024-11-12 10:35:50.946052] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:02.235 [2024-11-12 10:35:50.946299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:02.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=71948 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 71948 /var/tmp/bdevperf.sock 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71948 ']' 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.235 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.493 [2024-11-12 10:35:51.023315] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:02.493 [2024-11-12 10:35:51.023736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71948 ] 00:14:02.493 [2024-11-12 10:35:51.163938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.494 [2024-11-12 10:35:51.192838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.494 [2024-11-12 10:35:51.220622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.752 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.752 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:02.752 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SCyMkIOFpD 00:14:03.011 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:03.269 [2024-11-12 10:35:51.858093] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.269 nvme0n1 00:14:03.270 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:03.528 Running I/O for 1 seconds... 
00:14:04.488 4224.00 IOPS, 16.50 MiB/s 00:14:04.488 Latency(us) 00:14:04.488 [2024-11-12T10:35:53.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.488 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:04.488 Verification LBA range: start 0x0 length 0x2000 00:14:04.488 nvme0n1 : 1.03 4234.45 16.54 0.00 0.00 29935.87 8936.73 21448.15 00:14:04.488 [2024-11-12T10:35:53.246Z] =================================================================================================================== 00:14:04.488 [2024-11-12T10:35:53.246Z] Total : 4234.45 16.54 0.00 0.00 29935.87 8936.73 21448.15 00:14:04.488 { 00:14:04.488 "results": [ 00:14:04.488 { 00:14:04.488 "job": "nvme0n1", 00:14:04.488 "core_mask": "0x2", 00:14:04.488 "workload": "verify", 00:14:04.488 "status": "finished", 00:14:04.488 "verify_range": { 00:14:04.488 "start": 0, 00:14:04.488 "length": 8192 00:14:04.488 }, 00:14:04.488 "queue_depth": 128, 00:14:04.488 "io_size": 4096, 00:14:04.488 "runtime": 1.02776, 00:14:04.488 "iops": 4234.451622946992, 00:14:04.488 "mibps": 16.540826652136687, 00:14:04.488 "io_failed": 0, 00:14:04.488 "io_timeout": 0, 00:14:04.488 "avg_latency_us": 29935.873368983957, 00:14:04.488 "min_latency_us": 8936.727272727272, 00:14:04.488 "max_latency_us": 21448.145454545454 00:14:04.488 } 00:14:04.488 ], 00:14:04.488 "core_count": 1 00:14:04.488 } 00:14:04.488 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:04.488 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.488 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.748 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.748 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:04.748 "subsystems": [ 00:14:04.748 { 00:14:04.748 "subsystem": "keyring", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "keyring_file_add_key", 00:14:04.748 "params": { 00:14:04.748 "name": "key0", 00:14:04.748 "path": "/tmp/tmp.SCyMkIOFpD" 00:14:04.748 } 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "iobuf", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "iobuf_set_options", 00:14:04.748 "params": { 00:14:04.748 "small_pool_count": 8192, 00:14:04.748 "large_pool_count": 1024, 00:14:04.748 "small_bufsize": 8192, 00:14:04.748 "large_bufsize": 135168, 00:14:04.748 "enable_numa": false 00:14:04.748 } 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "sock", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "sock_set_default_impl", 00:14:04.748 "params": { 00:14:04.748 "impl_name": "uring" 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "sock_impl_set_options", 00:14:04.748 "params": { 00:14:04.748 "impl_name": "ssl", 00:14:04.748 "recv_buf_size": 4096, 00:14:04.748 "send_buf_size": 4096, 00:14:04.748 "enable_recv_pipe": true, 00:14:04.748 "enable_quickack": false, 00:14:04.748 "enable_placement_id": 0, 00:14:04.748 "enable_zerocopy_send_server": true, 00:14:04.748 "enable_zerocopy_send_client": false, 00:14:04.748 "zerocopy_threshold": 0, 00:14:04.748 "tls_version": 0, 00:14:04.748 "enable_ktls": false 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "sock_impl_set_options", 00:14:04.748 "params": { 00:14:04.748 "impl_name": "posix", 
00:14:04.748 "recv_buf_size": 2097152, 00:14:04.748 "send_buf_size": 2097152, 00:14:04.748 "enable_recv_pipe": true, 00:14:04.748 "enable_quickack": false, 00:14:04.748 "enable_placement_id": 0, 00:14:04.748 "enable_zerocopy_send_server": true, 00:14:04.748 "enable_zerocopy_send_client": false, 00:14:04.748 "zerocopy_threshold": 0, 00:14:04.748 "tls_version": 0, 00:14:04.748 "enable_ktls": false 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "sock_impl_set_options", 00:14:04.748 "params": { 00:14:04.748 "impl_name": "uring", 00:14:04.748 "recv_buf_size": 2097152, 00:14:04.748 "send_buf_size": 2097152, 00:14:04.748 "enable_recv_pipe": true, 00:14:04.748 "enable_quickack": false, 00:14:04.748 "enable_placement_id": 0, 00:14:04.748 "enable_zerocopy_send_server": false, 00:14:04.748 "enable_zerocopy_send_client": false, 00:14:04.748 "zerocopy_threshold": 0, 00:14:04.748 "tls_version": 0, 00:14:04.748 "enable_ktls": false 00:14:04.748 } 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "vmd", 00:14:04.748 "config": [] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "accel", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "accel_set_options", 00:14:04.748 "params": { 00:14:04.748 "small_cache_size": 128, 00:14:04.748 "large_cache_size": 16, 00:14:04.748 "task_count": 2048, 00:14:04.748 "sequence_count": 2048, 00:14:04.748 "buf_count": 2048 00:14:04.748 } 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "bdev", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "bdev_set_options", 00:14:04.748 "params": { 00:14:04.748 "bdev_io_pool_size": 65535, 00:14:04.748 "bdev_io_cache_size": 256, 00:14:04.748 "bdev_auto_examine": true, 00:14:04.748 "iobuf_small_cache_size": 128, 00:14:04.748 "iobuf_large_cache_size": 16 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_raid_set_options", 00:14:04.748 "params": { 00:14:04.748 "process_window_size_kb": 1024, 00:14:04.748 "process_max_bandwidth_mb_sec": 0 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_iscsi_set_options", 00:14:04.748 "params": { 00:14:04.748 "timeout_sec": 30 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_nvme_set_options", 00:14:04.748 "params": { 00:14:04.748 "action_on_timeout": "none", 00:14:04.748 "timeout_us": 0, 00:14:04.748 "timeout_admin_us": 0, 00:14:04.748 "keep_alive_timeout_ms": 10000, 00:14:04.748 "arbitration_burst": 0, 00:14:04.748 "low_priority_weight": 0, 00:14:04.748 "medium_priority_weight": 0, 00:14:04.748 "high_priority_weight": 0, 00:14:04.748 "nvme_adminq_poll_period_us": 10000, 00:14:04.748 "nvme_ioq_poll_period_us": 0, 00:14:04.748 "io_queue_requests": 0, 00:14:04.748 "delay_cmd_submit": true, 00:14:04.748 "transport_retry_count": 4, 00:14:04.748 "bdev_retry_count": 3, 00:14:04.748 "transport_ack_timeout": 0, 00:14:04.748 "ctrlr_loss_timeout_sec": 0, 00:14:04.748 "reconnect_delay_sec": 0, 00:14:04.748 "fast_io_fail_timeout_sec": 0, 00:14:04.748 "disable_auto_failback": false, 00:14:04.748 "generate_uuids": false, 00:14:04.748 "transport_tos": 0, 00:14:04.748 "nvme_error_stat": false, 00:14:04.748 "rdma_srq_size": 0, 00:14:04.748 "io_path_stat": false, 00:14:04.748 "allow_accel_sequence": false, 00:14:04.748 "rdma_max_cq_size": 0, 00:14:04.748 "rdma_cm_event_timeout_ms": 0, 00:14:04.748 "dhchap_digests": [ 00:14:04.748 "sha256", 00:14:04.748 "sha384", 00:14:04.748 "sha512" 00:14:04.748 ], 00:14:04.748 
"dhchap_dhgroups": [ 00:14:04.748 "null", 00:14:04.748 "ffdhe2048", 00:14:04.748 "ffdhe3072", 00:14:04.748 "ffdhe4096", 00:14:04.748 "ffdhe6144", 00:14:04.748 "ffdhe8192" 00:14:04.748 ] 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_nvme_set_hotplug", 00:14:04.748 "params": { 00:14:04.748 "period_us": 100000, 00:14:04.748 "enable": false 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_malloc_create", 00:14:04.748 "params": { 00:14:04.748 "name": "malloc0", 00:14:04.748 "num_blocks": 8192, 00:14:04.748 "block_size": 4096, 00:14:04.748 "physical_block_size": 4096, 00:14:04.748 "uuid": "141171cd-889e-44a4-a8bf-05b70c8ba3c5", 00:14:04.748 "optimal_io_boundary": 0, 00:14:04.748 "md_size": 0, 00:14:04.748 "dif_type": 0, 00:14:04.748 "dif_is_head_of_md": false, 00:14:04.748 "dif_pi_format": 0 00:14:04.748 } 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "method": "bdev_wait_for_examine" 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "nbd", 00:14:04.748 "config": [] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "scheduler", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "framework_set_scheduler", 00:14:04.748 "params": { 00:14:04.748 "name": "static" 00:14:04.748 } 00:14:04.748 } 00:14:04.748 ] 00:14:04.748 }, 00:14:04.748 { 00:14:04.748 "subsystem": "nvmf", 00:14:04.748 "config": [ 00:14:04.748 { 00:14:04.748 "method": "nvmf_set_config", 00:14:04.748 "params": { 00:14:04.748 "discovery_filter": "match_any", 00:14:04.748 "admin_cmd_passthru": { 00:14:04.748 "identify_ctrlr": false 00:14:04.748 }, 00:14:04.748 "dhchap_digests": [ 00:14:04.748 "sha256", 00:14:04.748 "sha384", 00:14:04.748 "sha512" 00:14:04.748 ], 00:14:04.748 "dhchap_dhgroups": [ 00:14:04.749 "null", 00:14:04.749 "ffdhe2048", 00:14:04.749 "ffdhe3072", 00:14:04.749 "ffdhe4096", 00:14:04.749 "ffdhe6144", 00:14:04.749 "ffdhe8192" 00:14:04.749 ] 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_set_max_subsystems", 00:14:04.749 "params": { 00:14:04.749 "max_subsystems": 1024 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_set_crdt", 00:14:04.749 "params": { 00:14:04.749 "crdt1": 0, 00:14:04.749 "crdt2": 0, 00:14:04.749 "crdt3": 0 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_create_transport", 00:14:04.749 "params": { 00:14:04.749 "trtype": "TCP", 00:14:04.749 "max_queue_depth": 128, 00:14:04.749 "max_io_qpairs_per_ctrlr": 127, 00:14:04.749 "in_capsule_data_size": 4096, 00:14:04.749 "max_io_size": 131072, 00:14:04.749 "io_unit_size": 131072, 00:14:04.749 "max_aq_depth": 128, 00:14:04.749 "num_shared_buffers": 511, 00:14:04.749 "buf_cache_size": 4294967295, 00:14:04.749 "dif_insert_or_strip": false, 00:14:04.749 "zcopy": false, 00:14:04.749 "c2h_success": false, 00:14:04.749 "sock_priority": 0, 00:14:04.749 "abort_timeout_sec": 1, 00:14:04.749 "ack_timeout": 0, 00:14:04.749 "data_wr_pool_size": 0 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_create_subsystem", 00:14:04.749 "params": { 00:14:04.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.749 "allow_any_host": false, 00:14:04.749 "serial_number": "00000000000000000000", 00:14:04.749 "model_number": "SPDK bdev Controller", 00:14:04.749 "max_namespaces": 32, 00:14:04.749 "min_cntlid": 1, 00:14:04.749 "max_cntlid": 65519, 00:14:04.749 "ana_reporting": false 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_subsystem_add_host", 
00:14:04.749 "params": { 00:14:04.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.749 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.749 "psk": "key0" 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_subsystem_add_ns", 00:14:04.749 "params": { 00:14:04.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.749 "namespace": { 00:14:04.749 "nsid": 1, 00:14:04.749 "bdev_name": "malloc0", 00:14:04.749 "nguid": "141171CD889E44A4A8BF05B70C8BA3C5", 00:14:04.749 "uuid": "141171cd-889e-44a4-a8bf-05b70c8ba3c5", 00:14:04.749 "no_auto_visible": false 00:14:04.749 } 00:14:04.749 } 00:14:04.749 }, 00:14:04.749 { 00:14:04.749 "method": "nvmf_subsystem_add_listener", 00:14:04.749 "params": { 00:14:04.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.749 "listen_address": { 00:14:04.749 "trtype": "TCP", 00:14:04.749 "adrfam": "IPv4", 00:14:04.749 "traddr": "10.0.0.3", 00:14:04.749 "trsvcid": "4420" 00:14:04.749 }, 00:14:04.749 "secure_channel": false, 00:14:04.749 "sock_impl": "ssl" 00:14:04.749 } 00:14:04.749 } 00:14:04.749 ] 00:14:04.749 } 00:14:04.749 ] 00:14:04.749 }' 00:14:04.749 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:05.008 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:05.008 "subsystems": [ 00:14:05.008 { 00:14:05.008 "subsystem": "keyring", 00:14:05.008 "config": [ 00:14:05.008 { 00:14:05.008 "method": "keyring_file_add_key", 00:14:05.008 "params": { 00:14:05.008 "name": "key0", 00:14:05.008 "path": "/tmp/tmp.SCyMkIOFpD" 00:14:05.008 } 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "iobuf", 00:14:05.008 "config": [ 00:14:05.008 { 00:14:05.008 "method": "iobuf_set_options", 00:14:05.008 "params": { 00:14:05.008 "small_pool_count": 8192, 00:14:05.008 "large_pool_count": 1024, 00:14:05.008 "small_bufsize": 8192, 00:14:05.008 "large_bufsize": 135168, 00:14:05.008 "enable_numa": false 00:14:05.008 } 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "sock", 00:14:05.008 "config": [ 00:14:05.008 { 00:14:05.008 "method": "sock_set_default_impl", 00:14:05.008 "params": { 00:14:05.008 "impl_name": "uring" 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "sock_impl_set_options", 00:14:05.008 "params": { 00:14:05.008 "impl_name": "ssl", 00:14:05.008 "recv_buf_size": 4096, 00:14:05.008 "send_buf_size": 4096, 00:14:05.008 "enable_recv_pipe": true, 00:14:05.008 "enable_quickack": false, 00:14:05.008 "enable_placement_id": 0, 00:14:05.008 "enable_zerocopy_send_server": true, 00:14:05.008 "enable_zerocopy_send_client": false, 00:14:05.008 "zerocopy_threshold": 0, 00:14:05.008 "tls_version": 0, 00:14:05.008 "enable_ktls": false 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "sock_impl_set_options", 00:14:05.008 "params": { 00:14:05.008 "impl_name": "posix", 00:14:05.008 "recv_buf_size": 2097152, 00:14:05.008 "send_buf_size": 2097152, 00:14:05.008 "enable_recv_pipe": true, 00:14:05.008 "enable_quickack": false, 00:14:05.008 "enable_placement_id": 0, 00:14:05.008 "enable_zerocopy_send_server": true, 00:14:05.008 "enable_zerocopy_send_client": false, 00:14:05.008 "zerocopy_threshold": 0, 00:14:05.008 "tls_version": 0, 00:14:05.008 "enable_ktls": false 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "sock_impl_set_options", 00:14:05.008 "params": { 00:14:05.008 "impl_name": "uring", 00:14:05.008 
"recv_buf_size": 2097152, 00:14:05.008 "send_buf_size": 2097152, 00:14:05.008 "enable_recv_pipe": true, 00:14:05.008 "enable_quickack": false, 00:14:05.008 "enable_placement_id": 0, 00:14:05.008 "enable_zerocopy_send_server": false, 00:14:05.008 "enable_zerocopy_send_client": false, 00:14:05.008 "zerocopy_threshold": 0, 00:14:05.008 "tls_version": 0, 00:14:05.008 "enable_ktls": false 00:14:05.008 } 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "vmd", 00:14:05.008 "config": [] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "accel", 00:14:05.008 "config": [ 00:14:05.008 { 00:14:05.008 "method": "accel_set_options", 00:14:05.008 "params": { 00:14:05.008 "small_cache_size": 128, 00:14:05.008 "large_cache_size": 16, 00:14:05.008 "task_count": 2048, 00:14:05.008 "sequence_count": 2048, 00:14:05.008 "buf_count": 2048 00:14:05.008 } 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "bdev", 00:14:05.008 "config": [ 00:14:05.008 { 00:14:05.008 "method": "bdev_set_options", 00:14:05.008 "params": { 00:14:05.008 "bdev_io_pool_size": 65535, 00:14:05.008 "bdev_io_cache_size": 256, 00:14:05.008 "bdev_auto_examine": true, 00:14:05.008 "iobuf_small_cache_size": 128, 00:14:05.008 "iobuf_large_cache_size": 16 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_raid_set_options", 00:14:05.008 "params": { 00:14:05.008 "process_window_size_kb": 1024, 00:14:05.008 "process_max_bandwidth_mb_sec": 0 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_iscsi_set_options", 00:14:05.008 "params": { 00:14:05.008 "timeout_sec": 30 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_nvme_set_options", 00:14:05.008 "params": { 00:14:05.008 "action_on_timeout": "none", 00:14:05.008 "timeout_us": 0, 00:14:05.008 "timeout_admin_us": 0, 00:14:05.008 "keep_alive_timeout_ms": 10000, 00:14:05.008 "arbitration_burst": 0, 00:14:05.008 "low_priority_weight": 0, 00:14:05.008 "medium_priority_weight": 0, 00:14:05.008 "high_priority_weight": 0, 00:14:05.008 "nvme_adminq_poll_period_us": 10000, 00:14:05.008 "nvme_ioq_poll_period_us": 0, 00:14:05.008 "io_queue_requests": 512, 00:14:05.008 "delay_cmd_submit": true, 00:14:05.008 "transport_retry_count": 4, 00:14:05.008 "bdev_retry_count": 3, 00:14:05.008 "transport_ack_timeout": 0, 00:14:05.008 "ctrlr_loss_timeout_sec": 0, 00:14:05.008 "reconnect_delay_sec": 0, 00:14:05.008 "fast_io_fail_timeout_sec": 0, 00:14:05.008 "disable_auto_failback": false, 00:14:05.008 "generate_uuids": false, 00:14:05.008 "transport_tos": 0, 00:14:05.008 "nvme_error_stat": false, 00:14:05.008 "rdma_srq_size": 0, 00:14:05.008 "io_path_stat": false, 00:14:05.008 "allow_accel_sequence": false, 00:14:05.008 "rdma_max_cq_size": 0, 00:14:05.008 "rdma_cm_event_timeout_ms": 0, 00:14:05.008 "dhchap_digests": [ 00:14:05.008 "sha256", 00:14:05.008 "sha384", 00:14:05.008 "sha512" 00:14:05.008 ], 00:14:05.008 "dhchap_dhgroups": [ 00:14:05.008 "null", 00:14:05.008 "ffdhe2048", 00:14:05.008 "ffdhe3072", 00:14:05.008 "ffdhe4096", 00:14:05.008 "ffdhe6144", 00:14:05.008 "ffdhe8192" 00:14:05.008 ] 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_nvme_attach_controller", 00:14:05.008 "params": { 00:14:05.008 "name": "nvme0", 00:14:05.008 "trtype": "TCP", 00:14:05.008 "adrfam": "IPv4", 00:14:05.008 "traddr": "10.0.0.3", 00:14:05.008 "trsvcid": "4420", 00:14:05.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.008 "prchk_reftag": false, 00:14:05.008 
"prchk_guard": false, 00:14:05.008 "ctrlr_loss_timeout_sec": 0, 00:14:05.008 "reconnect_delay_sec": 0, 00:14:05.008 "fast_io_fail_timeout_sec": 0, 00:14:05.008 "psk": "key0", 00:14:05.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.008 "hdgst": false, 00:14:05.008 "ddgst": false, 00:14:05.008 "multipath": "multipath" 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_nvme_set_hotplug", 00:14:05.008 "params": { 00:14:05.008 "period_us": 100000, 00:14:05.008 "enable": false 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_enable_histogram", 00:14:05.008 "params": { 00:14:05.008 "name": "nvme0n1", 00:14:05.008 "enable": true 00:14:05.008 } 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "method": "bdev_wait_for_examine" 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }, 00:14:05.008 { 00:14:05.008 "subsystem": "nbd", 00:14:05.008 "config": [] 00:14:05.008 } 00:14:05.008 ] 00:14:05.008 }' 00:14:05.008 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 71948 00:14:05.008 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71948 ']' 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71948 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71948 00:14:05.009 killing process with pid 71948 00:14:05.009 Received shutdown signal, test time was about 1.000000 seconds 00:14:05.009 00:14:05.009 Latency(us) 00:14:05.009 [2024-11-12T10:35:53.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.009 [2024-11-12T10:35:53.767Z] =================================================================================================================== 00:14:05.009 [2024-11-12T10:35:53.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71948' 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71948 00:14:05.009 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71948 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 71925 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71925 ']' 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71925 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71925 00:14:05.267 killing process with pid 71925 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71925' 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71925 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71925 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.267 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:05.267 "subsystems": [ 00:14:05.267 { 00:14:05.267 "subsystem": "keyring", 00:14:05.267 "config": [ 00:14:05.267 { 00:14:05.267 "method": "keyring_file_add_key", 00:14:05.267 "params": { 00:14:05.267 "name": "key0", 00:14:05.267 "path": "/tmp/tmp.SCyMkIOFpD" 00:14:05.267 } 00:14:05.267 } 00:14:05.267 ] 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "subsystem": "iobuf", 00:14:05.267 "config": [ 00:14:05.267 { 00:14:05.267 "method": "iobuf_set_options", 00:14:05.267 "params": { 00:14:05.267 "small_pool_count": 8192, 00:14:05.267 "large_pool_count": 1024, 00:14:05.267 "small_bufsize": 8192, 00:14:05.267 "large_bufsize": 135168, 00:14:05.267 "enable_numa": false 00:14:05.267 } 00:14:05.267 } 00:14:05.267 ] 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "subsystem": "sock", 00:14:05.267 "config": [ 00:14:05.267 { 00:14:05.267 "method": "sock_set_default_impl", 00:14:05.267 "params": { 00:14:05.267 "impl_name": "uring" 00:14:05.267 } 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "method": "sock_impl_set_options", 00:14:05.267 "params": { 00:14:05.267 "impl_name": "ssl", 00:14:05.267 "recv_buf_size": 4096, 00:14:05.267 "send_buf_size": 4096, 00:14:05.267 "enable_recv_pipe": true, 00:14:05.267 "enable_quickack": false, 00:14:05.268 "enable_placement_id": 0, 00:14:05.268 "enable_zerocopy_send_server": true, 00:14:05.268 "enable_zerocopy_send_client": false, 00:14:05.268 "zerocopy_threshold": 0, 00:14:05.268 "tls_version": 0, 00:14:05.268 "enable_ktls": false 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "sock_impl_set_options", 00:14:05.268 "params": { 00:14:05.268 "impl_name": "posix", 00:14:05.268 "recv_buf_size": 2097152, 00:14:05.268 "send_buf_size": 2097152, 00:14:05.268 "enable_recv_pipe": true, 00:14:05.268 "enable_quickack": false, 00:14:05.268 "enable_placement_id": 0, 00:14:05.268 "enable_zerocopy_send_server": true, 00:14:05.268 "enable_zerocopy_send_client": false, 00:14:05.268 "zerocopy_threshold": 0, 00:14:05.268 "tls_version": 0, 00:14:05.268 "enable_ktls": false 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "sock_impl_set_options", 00:14:05.268 "params": { 00:14:05.268 "impl_name": "uring", 00:14:05.268 "recv_buf_size": 2097152, 00:14:05.268 "send_buf_size": 2097152, 00:14:05.268 "enable_recv_pipe": true, 00:14:05.268 "enable_quickack": false, 00:14:05.268 "enable_placement_id": 0, 00:14:05.268 "enable_zerocopy_send_server": false, 00:14:05.268 "enable_zerocopy_send_client": false, 00:14:05.268 "zerocopy_threshold": 0, 00:14:05.268 "tls_version": 0, 00:14:05.268 "enable_ktls": false 00:14:05.268 } 00:14:05.268 } 00:14:05.268 ] 00:14:05.268 }, 00:14:05.268 { 
00:14:05.268 "subsystem": "vmd", 00:14:05.268 "config": [] 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "subsystem": "accel", 00:14:05.268 "config": [ 00:14:05.268 { 00:14:05.268 "method": "accel_set_options", 00:14:05.268 "params": { 00:14:05.268 "small_cache_size": 128, 00:14:05.268 "large_cache_size": 16, 00:14:05.268 "task_count": 2048, 00:14:05.268 "sequence_count": 2048, 00:14:05.268 "buf_count": 2048 00:14:05.268 } 00:14:05.268 } 00:14:05.268 ] 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "subsystem": "bdev", 00:14:05.268 "config": [ 00:14:05.268 { 00:14:05.268 "method": "bdev_set_options", 00:14:05.268 "params": { 00:14:05.268 "bdev_io_pool_size": 65535, 00:14:05.268 "bdev_io_cache_size": 256, 00:14:05.268 "bdev_auto_examine": true, 00:14:05.268 "iobuf_small_cache_size": 128, 00:14:05.268 "iobuf_large_cache_size": 16 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_raid_set_options", 00:14:05.268 "params": { 00:14:05.268 "process_window_size_kb": 1024, 00:14:05.268 "process_max_bandwidth_mb_sec": 0 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_iscsi_set_options", 00:14:05.268 "params": { 00:14:05.268 "timeout_sec": 30 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_nvme_set_options", 00:14:05.268 "params": { 00:14:05.268 "action_on_timeout": "none", 00:14:05.268 "timeout_us": 0, 00:14:05.268 "timeout_admin_us": 0, 00:14:05.268 "keep_alive_timeout_ms": 10000, 00:14:05.268 "arbitration_burst": 0, 00:14:05.268 "low_priority_weight": 0, 00:14:05.268 "medium_priority_weight": 0, 00:14:05.268 "high_priority_weight": 0, 00:14:05.268 "nvme_adminq_poll_period_us": 10000, 00:14:05.268 "nvme_ioq_poll_period_us": 0, 00:14:05.268 "io_queue_requests": 0, 00:14:05.268 "delay_cmd_submit": true, 00:14:05.268 "transport_retry_count": 4, 00:14:05.268 "bdev_retry_count": 3, 00:14:05.268 "transport_ack_timeout": 0, 00:14:05.268 "ctrlr_loss_timeout_sec": 0, 00:14:05.268 "reconnect_delay_sec": 0, 00:14:05.268 "fast_io_fail_timeout_sec": 0, 00:14:05.268 "disable_auto_failback": false, 00:14:05.268 "generate_uuids": false, 00:14:05.268 "transport_tos": 0, 00:14:05.268 "nvme_error_stat": false, 00:14:05.268 "rdma_srq_size": 0, 00:14:05.268 "io_path_stat": false, 00:14:05.268 "allow_accel_sequence": false, 00:14:05.268 "rdma_max_cq_size": 0, 00:14:05.268 "rdma_cm_event_timeout_ms": 0, 00:14:05.268 "dhchap_digests": [ 00:14:05.268 "sha256", 00:14:05.268 "sha384", 00:14:05.268 "sha512" 00:14:05.268 ], 00:14:05.268 "dhchap_dhgroups": [ 00:14:05.268 "null", 00:14:05.268 "ffdhe2048", 00:14:05.268 "ffdhe3072", 00:14:05.268 "ffdhe4096", 00:14:05.268 "ffdhe6144", 00:14:05.268 "ffdhe8192" 00:14:05.268 ] 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_nvme_set_hotplug", 00:14:05.268 "params": { 00:14:05.268 "period_us": 100000, 00:14:05.268 "enable": false 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_malloc_create", 00:14:05.268 "params": { 00:14:05.268 "name": "malloc0", 00:14:05.268 "num_blocks": 8192, 00:14:05.268 "block_size": 4096, 00:14:05.268 "physical_block_size": 4096, 00:14:05.268 "uuid": "141171cd-889e-44a4-a8bf-05b70c8ba3c5", 00:14:05.268 "optimal_io_boundary": 0, 00:14:05.268 "md_size": 0, 00:14:05.268 "dif_type": 0, 00:14:05.268 "dif_is_head_of_md": false, 00:14:05.268 "dif_pi_format": 0 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "bdev_wait_for_examine" 00:14:05.268 } 00:14:05.268 ] 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "subsystem": 
"nbd", 00:14:05.268 "config": [] 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "subsystem": "scheduler", 00:14:05.268 "config": [ 00:14:05.268 { 00:14:05.268 "method": "framework_set_scheduler", 00:14:05.268 "params": { 00:14:05.268 "name": "static" 00:14:05.268 } 00:14:05.268 } 00:14:05.268 ] 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "subsystem": "nvmf", 00:14:05.268 "config": [ 00:14:05.268 { 00:14:05.268 "method": "nvmf_set_config", 00:14:05.268 "params": { 00:14:05.268 "discovery_filter": "match_any", 00:14:05.268 "admin_cmd_passthru": { 00:14:05.268 "identify_ctrlr": false 00:14:05.268 }, 00:14:05.268 "dhchap_digests": [ 00:14:05.268 "sha256", 00:14:05.268 "sha384", 00:14:05.268 "sha512" 00:14:05.268 ], 00:14:05.268 "dhchap_dhgroups": [ 00:14:05.268 "null", 00:14:05.268 "ffdhe2048", 00:14:05.268 "ffdhe3072", 00:14:05.268 "ffdhe4096", 00:14:05.268 "ffdhe6144", 00:14:05.268 "ffdhe8192" 00:14:05.268 ] 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_set_max_subsystems", 00:14:05.268 "params": { 00:14:05.268 "max_subsystems": 1024 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_set_crdt", 00:14:05.268 "params": { 00:14:05.268 "crdt1": 0, 00:14:05.268 "crdt2": 0, 00:14:05.268 "crdt3": 0 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_create_transport", 00:14:05.268 "params": { 00:14:05.268 "trtype": "TCP", 00:14:05.268 "max_queue_depth": 128, 00:14:05.268 "max_io_qpairs_per_ctrlr": 127, 00:14:05.268 "in_capsule_data_size": 4096, 00:14:05.268 "max_io_size": 131072, 00:14:05.268 "io_unit_size": 131072, 00:14:05.268 "max_aq_depth": 128, 00:14:05.268 "num_shared_buffers": 511, 00:14:05.268 "buf_cache_size": 4294967295, 00:14:05.268 "dif_insert_or_strip": false, 00:14:05.268 "zcopy": false, 00:14:05.268 "c2h_success": false, 00:14:05.268 "sock_priority": 0, 00:14:05.268 "abort_timeout_sec": 1, 00:14:05.268 "ack_timeout": 0, 00:14:05.268 "data_wr_pool_size": 0 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_create_subsystem", 00:14:05.268 "params": { 00:14:05.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.268 "allow_any_host": false, 00:14:05.268 "serial_number": "00000000000000000000", 00:14:05.268 "model_number": "SPDK bdev Controller", 00:14:05.268 "max_namespaces": 32, 00:14:05.268 "min_cntlid": 1, 00:14:05.268 "max_cntlid": 65519, 00:14:05.268 "ana_reporting": false 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_subsystem_add_host", 00:14:05.268 "params": { 00:14:05.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.268 "host": "nqn.2016-06.io.spdk:host1", 00:14:05.268 "psk": "key0" 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_subsystem_add_ns", 00:14:05.268 "params": { 00:14:05.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.268 "namespace": { 00:14:05.268 "nsid": 1, 00:14:05.268 "bdev_name": "malloc0", 00:14:05.268 "nguid": "141171CD889E44A4A8BF05B70C8BA3C5", 00:14:05.268 "uuid": "141171cd-889e-44a4-a8bf-05b70c8ba3c5", 00:14:05.268 "no_auto_visible": false 00:14:05.268 } 00:14:05.268 } 00:14:05.268 }, 00:14:05.268 { 00:14:05.268 "method": "nvmf_subsystem_add_listener", 00:14:05.268 "params": { 00:14:05.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.269 "listen_address": { 00:14:05.269 "trtype": "TCP", 00:14:05.269 "adrfam": "IPv4", 00:14:05.269 "traddr": "10.0.0.3", 00:14:05.269 "trsvcid": "4420" 00:14:05.269 }, 00:14:05.269 "secure_channel": false, 00:14:05.269 "sock_impl": "ssl" 00:14:05.269 } 00:14:05.269 } 
00:14:05.269 ] 00:14:05.269 } 00:14:05.269 ] 00:14:05.269 }' 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71997 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71997 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71997 ']' 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.269 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.269 [2024-11-12 10:35:54.013965] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:05.269 [2024-11-12 10:35:54.014078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.527 [2024-11-12 10:35:54.159779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.527 [2024-11-12 10:35:54.188159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.527 [2024-11-12 10:35:54.188508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.527 [2024-11-12 10:35:54.188543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.527 [2024-11-12 10:35:54.188551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.527 [2024-11-12 10:35:54.188558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.527 [2024-11-12 10:35:54.188912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.786 [2024-11-12 10:35:54.330388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.786 [2024-11-12 10:35:54.389242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.786 [2024-11-12 10:35:54.421128] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.786 [2024-11-12 10:35:54.421394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.351 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.351 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:06.351 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.351 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.351 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72029 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72029 /var/tmp/bdevperf.sock 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72029 ']' 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:06.351 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:06.351 "subsystems": [ 00:14:06.351 { 00:14:06.351 "subsystem": "keyring", 00:14:06.351 "config": [ 00:14:06.351 { 00:14:06.351 "method": "keyring_file_add_key", 00:14:06.351 "params": { 00:14:06.351 "name": "key0", 00:14:06.351 "path": "/tmp/tmp.SCyMkIOFpD" 00:14:06.351 } 00:14:06.351 } 00:14:06.351 ] 00:14:06.351 }, 00:14:06.351 { 00:14:06.351 "subsystem": "iobuf", 00:14:06.351 "config": [ 00:14:06.351 { 00:14:06.351 "method": "iobuf_set_options", 00:14:06.351 "params": { 00:14:06.351 "small_pool_count": 8192, 00:14:06.351 "large_pool_count": 1024, 00:14:06.351 "small_bufsize": 8192, 00:14:06.351 "large_bufsize": 135168, 00:14:06.351 "enable_numa": false 00:14:06.351 } 00:14:06.351 } 00:14:06.351 ] 00:14:06.351 }, 00:14:06.351 { 00:14:06.351 "subsystem": "sock", 00:14:06.351 "config": [ 00:14:06.351 { 00:14:06.351 "method": "sock_set_default_impl", 00:14:06.351 "params": { 00:14:06.351 "impl_name": "uring" 00:14:06.351 } 00:14:06.351 }, 00:14:06.351 { 00:14:06.351 "method": "sock_impl_set_options", 00:14:06.351 "params": { 00:14:06.351 "impl_name": "ssl", 00:14:06.351 "recv_buf_size": 4096, 00:14:06.351 "send_buf_size": 4096, 00:14:06.351 "enable_recv_pipe": true, 00:14:06.351 "enable_quickack": false, 00:14:06.351 "enable_placement_id": 0, 00:14:06.351 "enable_zerocopy_send_server": true, 00:14:06.351 "enable_zerocopy_send_client": false, 00:14:06.351 "zerocopy_threshold": 0, 00:14:06.351 "tls_version": 0, 00:14:06.351 "enable_ktls": false 00:14:06.351 } 00:14:06.351 }, 00:14:06.351 { 00:14:06.351 "method": "sock_impl_set_options", 00:14:06.351 "params": { 00:14:06.351 "impl_name": "posix", 00:14:06.351 "recv_buf_size": 2097152, 00:14:06.351 "send_buf_size": 2097152, 00:14:06.351 "enable_recv_pipe": true, 00:14:06.351 "enable_quickack": false, 00:14:06.351 "enable_placement_id": 0, 00:14:06.351 "enable_zerocopy_send_server": true, 00:14:06.351 "enable_zerocopy_send_client": false, 00:14:06.351 "zerocopy_threshold": 0, 00:14:06.351 "tls_version": 0, 00:14:06.352 "enable_ktls": false 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "sock_impl_set_options", 00:14:06.352 "params": { 00:14:06.352 "impl_name": "uring", 00:14:06.352 "recv_buf_size": 2097152, 00:14:06.352 "send_buf_size": 2097152, 00:14:06.352 "enable_recv_pipe": true, 00:14:06.352 "enable_quickack": false, 00:14:06.352 "enable_placement_id": 0, 00:14:06.352 "enable_zerocopy_send_server": false, 00:14:06.352 "enable_zerocopy_send_client": false, 00:14:06.352 "zerocopy_threshold": 0, 00:14:06.352 "tls_version": 0, 00:14:06.352 "enable_ktls": false 00:14:06.352 } 00:14:06.352 } 00:14:06.352 ] 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "subsystem": "vmd", 00:14:06.352 "config": [] 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "subsystem": "accel", 00:14:06.352 "config": [ 00:14:06.352 { 00:14:06.352 "method": "accel_set_options", 00:14:06.352 "params": { 00:14:06.352 "small_cache_size": 128, 00:14:06.352 "large_cache_size": 16, 00:14:06.352 "task_count": 2048, 00:14:06.352 "sequence_count": 2048, 
00:14:06.352 "buf_count": 2048 00:14:06.352 } 00:14:06.352 } 00:14:06.352 ] 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "subsystem": "bdev", 00:14:06.352 "config": [ 00:14:06.352 { 00:14:06.352 "method": "bdev_set_options", 00:14:06.352 "params": { 00:14:06.352 "bdev_io_pool_size": 65535, 00:14:06.352 "bdev_io_cache_size": 256, 00:14:06.352 "bdev_auto_examine": true, 00:14:06.352 "iobuf_small_cache_size": 128, 00:14:06.352 "iobuf_large_cache_size": 16 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_raid_set_options", 00:14:06.352 "params": { 00:14:06.352 "process_window_size_kb": 1024, 00:14:06.352 "process_max_bandwidth_mb_sec": 0 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_iscsi_set_options", 00:14:06.352 "params": { 00:14:06.352 "timeout_sec": 30 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_nvme_set_options", 00:14:06.352 "params": { 00:14:06.352 "action_on_timeout": "none", 00:14:06.352 "timeout_us": 0, 00:14:06.352 "timeout_admin_us": 0, 00:14:06.352 "keep_alive_timeout_ms": 10000, 00:14:06.352 "arbitration_burst": 0, 00:14:06.352 "low_priority_weight": 0, 00:14:06.352 "medium_priority_weight": 0, 00:14:06.352 "high_priority_weight": 0, 00:14:06.352 "nvme_adminq_poll_period_us": 10000, 00:14:06.352 "nvme_ioq_poll_period_us": 0, 00:14:06.352 "io_queue_requests": 512, 00:14:06.352 "delay_cmd_submit": true, 00:14:06.352 "transport_retry_count": 4, 00:14:06.352 "bdev_retry_count": 3, 00:14:06.352 "transport_ack_timeout": 0, 00:14:06.352 "ctrlr_loss_timeout_sec": 0, 00:14:06.352 "reconnect_delay_sec": 0, 00:14:06.352 "fast_io_fail_timeout_sec": 0, 00:14:06.352 "disable_auto_failback": false, 00:14:06.352 "generate_uuids": false, 00:14:06.352 "transport_tos": 0, 00:14:06.352 "nvme_error_stat": false, 00:14:06.352 "rdma_srq_size": 0, 00:14:06.352 "io_path_stat": false, 00:14:06.352 "allow_accel_sequence": false, 00:14:06.352 "rdma_max_cq_size": 0, 00:14:06.352 "rdma_cm_event_timeout_ms": 0, 00:14:06.352 "dhchap_digests": [ 00:14:06.352 "sha256", 00:14:06.352 "sha384", 00:14:06.352 "sha512" 00:14:06.352 ], 00:14:06.352 "dhchap_dhgroups": [ 00:14:06.352 "null", 00:14:06.352 "ffdhe2048", 00:14:06.352 "ffdhe3072", 00:14:06.352 "ffdhe4096", 00:14:06.352 "ffdhe6144", 00:14:06.352 "ffdhe8192" 00:14:06.352 ] 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_nvme_attach_controller", 00:14:06.352 "params": { 00:14:06.352 "name": "nvme0", 00:14:06.352 "trtype": "TCP", 00:14:06.352 "adrfam": "IPv4", 00:14:06.352 "traddr": "10.0.0.3", 00:14:06.352 "trsvcid": "4420", 00:14:06.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.352 "prchk_reftag": false, 00:14:06.352 "prchk_guard": false, 00:14:06.352 "ctrlr_loss_timeout_sec": 0, 00:14:06.352 "reconnect_delay_sec": 0, 00:14:06.352 "fast_io_fail_timeout_sec": 0, 00:14:06.352 "psk": "key0", 00:14:06.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.352 "hdgst": false, 00:14:06.352 "ddgst": false, 00:14:06.352 "multipath": "multipath" 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_nvme_set_hotplug", 00:14:06.352 "params": { 00:14:06.352 "period_us": 100000, 00:14:06.352 "enable": false 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_enable_histogram", 00:14:06.352 "params": { 00:14:06.352 "name": "nvme0n1", 00:14:06.352 "enable": true 00:14:06.352 } 00:14:06.352 }, 00:14:06.352 { 00:14:06.352 "method": "bdev_wait_for_examine" 00:14:06.352 } 00:14:06.352 ] 00:14:06.352 }, 00:14:06.352 { 
00:14:06.352 "subsystem": "nbd", 00:14:06.352 "config": [] 00:14:06.352 } 00:14:06.352 ] 00:14:06.352 }' 00:14:06.352 [2024-11-12 10:35:55.065391] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:06.352 [2024-11-12 10:35:55.065715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72029 ] 00:14:06.610 [2024-11-12 10:35:55.219004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.610 [2024-11-12 10:35:55.257724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.868 [2024-11-12 10:35:55.372299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.868 [2024-11-12 10:35:55.404443] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.434 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.434 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:07.434 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:07.434 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:07.693 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.693 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:07.693 Running I/O for 1 seconds... 
00:14:09.070 4480.00 IOPS, 17.50 MiB/s 00:14:09.071 Latency(us) 00:14:09.071 [2024-11-12T10:35:57.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.071 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:09.071 Verification LBA range: start 0x0 length 0x2000 00:14:09.071 nvme0n1 : 1.02 4497.12 17.57 0.00 0.00 28155.04 7298.33 19541.64 00:14:09.071 [2024-11-12T10:35:57.829Z] =================================================================================================================== 00:14:09.071 [2024-11-12T10:35:57.829Z] Total : 4497.12 17.57 0.00 0.00 28155.04 7298.33 19541.64 00:14:09.071 { 00:14:09.071 "results": [ 00:14:09.071 { 00:14:09.071 "job": "nvme0n1", 00:14:09.071 "core_mask": "0x2", 00:14:09.071 "workload": "verify", 00:14:09.071 "status": "finished", 00:14:09.071 "verify_range": { 00:14:09.071 "start": 0, 00:14:09.071 "length": 8192 00:14:09.071 }, 00:14:09.071 "queue_depth": 128, 00:14:09.071 "io_size": 4096, 00:14:09.071 "runtime": 1.024655, 00:14:09.071 "iops": 4497.123422029854, 00:14:09.071 "mibps": 17.566888367304117, 00:14:09.071 "io_failed": 0, 00:14:09.071 "io_timeout": 0, 00:14:09.071 "avg_latency_us": 28155.035151515152, 00:14:09.071 "min_latency_us": 7298.327272727272, 00:14:09.071 "max_latency_us": 19541.643636363635 00:14:09.071 } 00:14:09.071 ], 00:14:09.071 "core_count": 1 00:14:09.071 } 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:09.071 nvmf_trace.0 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72029 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72029 ']' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72029 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72029 00:14:09.071 killing process 
with pid 72029 00:14:09.071 Received shutdown signal, test time was about 1.000000 seconds 00:14:09.071 00:14:09.071 Latency(us) 00:14:09.071 [2024-11-12T10:35:57.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.071 [2024-11-12T10:35:57.829Z] =================================================================================================================== 00:14:09.071 [2024-11-12T10:35:57.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72029' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72029 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72029 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.071 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.071 rmmod nvme_tcp 00:14:09.071 rmmod nvme_fabrics 00:14:09.330 rmmod nvme_keyring 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71997 ']' 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71997 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71997 ']' 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71997 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71997 00:14:09.330 killing process with pid 71997 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71997' 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71997 00:14:09.330 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71997 
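The JSON block printed with the results is enough to cross-check the summary line. Assuming it has been saved to a file named results.json (an assumption; the run only prints it), the reported MiB/s and average latency follow directly from iops, io_size and queue_depth, and jq is already used elsewhere in this run:

    # MiB/s = iops * io_size / 2^20
    jq '.results[0] | .iops * .io_size / (1024 * 1024)' results.json
    # -> 17.566..., matching the reported "mibps"

    # Rough Little's-law estimate of mean latency in microseconds: queue_depth / iops * 1e6
    jq '.results[0] | .queue_depth / .iops * 1000000' results.json
    # -> ~28463 us, in the same ballpark as the reported "avg_latency_us" (28155)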
00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:09.330 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YgMyJpVa4m /tmp/tmp.VlN36E5DlV /tmp/tmp.SCyMkIOFpD 00:14:09.590 00:14:09.590 real 1m20.909s 00:14:09.590 user 2m11.413s 00:14:09.590 sys 0m26.218s 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.590 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.590 
************************************ 00:14:09.590 END TEST nvmf_tls 00:14:09.590 ************************************ 00:14:09.850 10:35:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:09.850 10:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:09.850 10:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.850 10:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.850 ************************************ 00:14:09.850 START TEST nvmf_fips 00:14:09.850 ************************************ 00:14:09.850 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:09.850 * Looking for test storage... 00:14:09.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:09.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.851 --rc genhtml_branch_coverage=1 00:14:09.851 --rc genhtml_function_coverage=1 00:14:09.851 --rc genhtml_legend=1 00:14:09.851 --rc geninfo_all_blocks=1 00:14:09.851 --rc geninfo_unexecuted_blocks=1 00:14:09.851 00:14:09.851 ' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:09.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.851 --rc genhtml_branch_coverage=1 00:14:09.851 --rc genhtml_function_coverage=1 00:14:09.851 --rc genhtml_legend=1 00:14:09.851 --rc geninfo_all_blocks=1 00:14:09.851 --rc geninfo_unexecuted_blocks=1 00:14:09.851 00:14:09.851 ' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:09.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.851 --rc genhtml_branch_coverage=1 00:14:09.851 --rc genhtml_function_coverage=1 00:14:09.851 --rc genhtml_legend=1 00:14:09.851 --rc geninfo_all_blocks=1 00:14:09.851 --rc geninfo_unexecuted_blocks=1 00:14:09.851 00:14:09.851 ' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:09.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.851 --rc genhtml_branch_coverage=1 00:14:09.851 --rc genhtml_function_coverage=1 00:14:09.851 --rc genhtml_legend=1 00:14:09.851 --rc geninfo_all_blocks=1 00:14:09.851 --rc geninfo_unexecuted_blocks=1 00:14:09.851 00:14:09.851 ' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
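The cmp_versions trace above (and the identical check applied to the OpenSSL version a little further down) amounts to splitting both version strings on dots and comparing them field by field. A standalone sketch of that logic — not the actual scripts/common.sh implementation:

    # version_ge A B  ->  exit 0 when A >= B (numeric, dot-separated fields)
    version_ge() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1
        done
        return 0   # all fields equal
    }
    version_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS test"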
00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.851 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.851 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.852 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:10.112 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:10.113 Error setting digest 00:14:10.113 40B21E3A067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:10.113 40B21E3A067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.113 
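What the fips.sh preamble above establishes can be reproduced by hand: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the provider list should show a base and a fips provider, and a non-approved digest such as MD5 must fail exactly as in the "Error setting digest" output. A small sketch of that check (spdk_fips.conf is the file build_openssl_config writes; path assumed relative to the working directory):

    export OPENSSL_CONF=spdk_fips.conf      # written by build_openssl_config above
    openssl list -providers | grep name     # expect "openssl base provider" plus a fips provider

    # Under an enforced FIPS provider this must fail, as it does in the log above.
    if openssl md5 <(echo -n test) 2>/dev/null; then
        echo "MD5 digest succeeded - FIPS mode is not enforced" >&2
    else
        echo "MD5 rejected, as expected in FIPS mode"
    fi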
10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:10.113 Cannot find device "nvmf_init_br" 00:14:10.113 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:10.113 Cannot find device "nvmf_init_br2" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:10.113 Cannot find device "nvmf_tgt_br" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.113 Cannot find device "nvmf_tgt_br2" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:10.113 Cannot find device "nvmf_init_br" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:10.113 Cannot find device "nvmf_init_br2" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:10.113 Cannot find device "nvmf_tgt_br" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:10.113 Cannot find device "nvmf_tgt_br2" 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:10.113 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:10.373 Cannot find device "nvmf_br" 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:10.373 Cannot find device "nvmf_init_if" 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:10.373 Cannot find device "nvmf_init_if2" 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.373 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:10.373 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.373 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.374 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:10.374 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:10.374 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:10.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:10.633 00:14:10.633 --- 10.0.0.3 ping statistics --- 00:14:10.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.633 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:10.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:10.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:10.633 00:14:10.633 --- 10.0.0.4 ping statistics --- 00:14:10.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.633 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:10.633 00:14:10.633 --- 10.0.0.1 ping statistics --- 00:14:10.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.633 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:10.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:10.633 00:14:10.633 --- 10.0.0.2 ping statistics --- 00:14:10.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.633 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72350 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72350 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72350 ']' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.633 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.634 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:10.634 [2024-11-12 10:35:59.280613] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
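For readers following the nvmf/common.sh trace above: before the FIPS test starts, the harness builds a small veth-plus-bridge topology, moves the target-side interfaces into a dedicated network namespace, punches an iptables hole for port 4420, and then launches nvmf_tgt inside that namespace. The lines below are an illustrative condensation of exactly those traced commands (interface names, addresses and the nvmf_tgt command line are copied from the trace; the second interface pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is set up the same way and omitted here):

    # condensed sketch of nvmf_veth_init as traced above (not the SPDK helper itself)
    ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge ties the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic in; the comment tag lets nvmftestfini remove only these rules later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # the target application is then started inside the namespace (command line as logged)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2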
00:14:10.634 [2024-11-12 10:35:59.281408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.893 [2024-11-12 10:35:59.438463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.893 [2024-11-12 10:35:59.477106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.893 [2024-11-12 10:35:59.477166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.893 [2024-11-12 10:35:59.477202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.893 [2024-11-12 10:35:59.477224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.893 [2024-11-12 10:35:59.477233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.893 [2024-11-12 10:35:59.477610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.893 [2024-11-12 10:35:59.512568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:11.461 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.461 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:11.461 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.461 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.461 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.yKK 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.yKK 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.yKK 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.yKK 00:14:11.720 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.980 [2024-11-12 10:36:00.579837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.980 [2024-11-12 10:36:00.595807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:11.980 [2024-11-12 10:36:00.596018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:11.980 malloc0 00:14:11.980 10:36:00 
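The fips.sh trace just above prepares a TLS pre-shared key before configuring the target: it writes the PSK interchange string to a mode-0600 temp file and hands that path to setup_nvmf_tgt_conf. A minimal reproduction of that step, with the key string and file naming taken verbatim from the trace:

    # PSK interchange string and temp-file naming as they appear in the trace above
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)      # resolved to /tmp/spdk-psk.yKK in this run
    echo -n "$key" > "$key_path"            # -n: no trailing newline in the key file
    chmod 0600 "$key_path"                  # restrict permissions before handing the key out

setup_nvmf_tgt_conf then drives scripts/rpc.py against the running nvmf_tgt, which is what produces the "TCP Transport Init", experimental-TLS and "Listening on 10.0.0.3 port 4420" notices and the malloc0 namespace shown above.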
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72386 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72386 /var/tmp/bdevperf.sock 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72386 ']' 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.980 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:11.980 [2024-11-12 10:36:00.730664] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:11.980 [2024-11-12 10:36:00.730758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72386 ] 00:14:12.239 [2024-11-12 10:36:00.880020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.239 [2024-11-12 10:36:00.919539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.239 [2024-11-12 10:36:00.952604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:12.498 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.498 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:12.498 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.yKK 00:14:12.498 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:12.757 [2024-11-12 10:36:01.469668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:13.016 TLSTESTn1 00:14:13.016 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.016 Running I/O for 10 seconds... 
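On the initiator side, the FIPS check registers the same PSK file as a named key inside bdevperf and attaches a TLS-protected NVMe/TCP controller with it, then drives the verify workload. The commands below are lifted directly from the rpc trace above (socket path, key name, address and NQNs as logged):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # register the PSK file under the name "key0" inside the bdevperf application
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/spdk-psk.yKK

    # attach a TLS-secured NVMe/TCP controller using that key; namespace 1 shows up as bdev TLSTESTn1
    $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # run the workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests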
00:14:15.333 4151.00 IOPS, 16.21 MiB/s [2024-11-12T10:36:05.032Z] 4267.50 IOPS, 16.67 MiB/s [2024-11-12T10:36:05.972Z] 4274.67 IOPS, 16.70 MiB/s [2024-11-12T10:36:06.911Z] 4089.50 IOPS, 15.97 MiB/s [2024-11-12T10:36:07.850Z] 4058.40 IOPS, 15.85 MiB/s [2024-11-12T10:36:08.788Z] 4123.67 IOPS, 16.11 MiB/s [2024-11-12T10:36:09.725Z] 4105.29 IOPS, 16.04 MiB/s [2024-11-12T10:36:10.663Z] 4155.38 IOPS, 16.23 MiB/s [2024-11-12T10:36:11.662Z] 4191.56 IOPS, 16.37 MiB/s [2024-11-12T10:36:11.922Z] 4209.90 IOPS, 16.44 MiB/s 00:14:23.164 Latency(us) 00:14:23.164 [2024-11-12T10:36:11.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.164 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:23.164 Verification LBA range: start 0x0 length 0x2000 00:14:23.164 TLSTESTn1 : 10.02 4215.62 16.47 0.00 0.00 30309.81 5540.77 26095.24 00:14:23.164 [2024-11-12T10:36:11.922Z] =================================================================================================================== 00:14:23.164 [2024-11-12T10:36:11.922Z] Total : 4215.62 16.47 0.00 0.00 30309.81 5540.77 26095.24 00:14:23.164 { 00:14:23.164 "results": [ 00:14:23.164 { 00:14:23.164 "job": "TLSTESTn1", 00:14:23.164 "core_mask": "0x4", 00:14:23.164 "workload": "verify", 00:14:23.164 "status": "finished", 00:14:23.164 "verify_range": { 00:14:23.164 "start": 0, 00:14:23.164 "length": 8192 00:14:23.164 }, 00:14:23.164 "queue_depth": 128, 00:14:23.164 "io_size": 4096, 00:14:23.164 "runtime": 10.016788, 00:14:23.164 "iops": 4215.622812422505, 00:14:23.164 "mibps": 16.46727661102541, 00:14:23.164 "io_failed": 0, 00:14:23.164 "io_timeout": 0, 00:14:23.164 "avg_latency_us": 30309.81341726642, 00:14:23.164 "min_latency_us": 5540.770909090909, 00:14:23.164 "max_latency_us": 26095.243636363637 00:14:23.164 } 00:14:23.164 ], 00:14:23.164 "core_count": 1 00:14:23.164 } 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:23.164 nvmf_trace.0 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72386 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72386 ']' 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
72386 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72386 00:14:23.164 killing process with pid 72386 00:14:23.164 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.164 00:14:23.164 Latency(us) 00:14:23.164 [2024-11-12T10:36:11.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.164 [2024-11-12T10:36:11.922Z] =================================================================================================================== 00:14:23.164 [2024-11-12T10:36:11.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.164 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:23.165 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:23.165 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72386' 00:14:23.165 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72386 00:14:23.165 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72386 00:14:23.424 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:23.424 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:23.424 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.424 rmmod nvme_tcp 00:14:23.424 rmmod nvme_fabrics 00:14:23.424 rmmod nvme_keyring 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72350 ']' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72350 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72350 ']' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72350 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72350 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:23.424 killing process with pid 72350 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72350' 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72350 00:14:23.424 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72350 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:23.684 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:23.943 10:36:12 
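The nvmftestfini trace above undoes the environment in reverse: it strips only the iptables rules tagged with the SPDK_NVMF comment, detaches and deletes the veth/bridge topology, and finally removes the namespace. A condensed sketch of the commands visible in the trace; the closing `ip netns delete` is an assumption about what _remove_spdk_ns does, since its body is not echoed in the log (the second interface pair is handled the same way and omitted):

    # keep every firewall rule except the ones the test tagged with an SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach bridge members, bring links down, delete bridge and veth pairs
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if

    # assumption: _remove_spdk_ns ultimately removes the namespace itself
    ip netns delete nvmf_tgt_ns_spdk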
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.yKK 00:14:23.943 ************************************ 00:14:23.943 END TEST nvmf_fips 00:14:23.943 ************************************ 00:14:23.943 00:14:23.943 real 0m14.165s 00:14:23.943 user 0m19.046s 00:14:23.943 sys 0m5.656s 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.943 ************************************ 00:14:23.943 START TEST nvmf_control_msg_list 00:14:23.943 ************************************ 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:23.943 * Looking for test storage... 00:14:23.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:23.943 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:24.204 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:24.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.205 --rc genhtml_branch_coverage=1 00:14:24.205 --rc genhtml_function_coverage=1 00:14:24.205 --rc genhtml_legend=1 00:14:24.205 --rc geninfo_all_blocks=1 00:14:24.205 --rc geninfo_unexecuted_blocks=1 00:14:24.205 00:14:24.205 ' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:24.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.205 --rc genhtml_branch_coverage=1 00:14:24.205 --rc genhtml_function_coverage=1 00:14:24.205 --rc genhtml_legend=1 00:14:24.205 --rc geninfo_all_blocks=1 00:14:24.205 --rc geninfo_unexecuted_blocks=1 00:14:24.205 00:14:24.205 ' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:24.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.205 --rc genhtml_branch_coverage=1 00:14:24.205 --rc genhtml_function_coverage=1 00:14:24.205 --rc genhtml_legend=1 00:14:24.205 --rc geninfo_all_blocks=1 00:14:24.205 --rc geninfo_unexecuted_blocks=1 00:14:24.205 00:14:24.205 ' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:24.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.205 --rc genhtml_branch_coverage=1 00:14:24.205 --rc genhtml_function_coverage=1 00:14:24.205 --rc genhtml_legend=1 00:14:24.205 --rc geninfo_all_blocks=1 00:14:24.205 --rc 
geninfo_unexecuted_blocks=1 00:14:24.205 00:14:24.205 ' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:24.205 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:24.206 Cannot find device "nvmf_init_br" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:24.206 Cannot find device "nvmf_init_br2" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:24.206 Cannot find device "nvmf_tgt_br" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.206 Cannot find device "nvmf_tgt_br2" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:24.206 Cannot find device "nvmf_init_br" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:24.206 Cannot find device "nvmf_init_br2" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:24.206 Cannot find device "nvmf_tgt_br" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:24.206 Cannot find device "nvmf_tgt_br2" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:24.206 Cannot find device "nvmf_br" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:24.206 Cannot find 
device "nvmf_init_if" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:24.206 Cannot find device "nvmf_init_if2" 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:24.206 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:24.466 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:24.466 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:24.466 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:24.466 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:24.466 10:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:24.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:24.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:24.466 00:14:24.466 --- 10.0.0.3 ping statistics --- 00:14:24.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.466 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:24.466 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:24.466 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:24.466 00:14:24.466 --- 10.0.0.4 ping statistics --- 00:14:24.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.466 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:24.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:24.466 00:14:24.466 --- 10.0.0.1 ping statistics --- 00:14:24.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.466 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:24.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:24.466 00:14:24.466 --- 10.0.0.2 ping statistics --- 00:14:24.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.466 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:24.466 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72778 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72778 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 72778 ']' 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.467 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.726 [2024-11-12 10:36:13.261944] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:24.726 [2024-11-12 10:36:13.262277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.726 [2024-11-12 10:36:13.406922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.726 [2024-11-12 10:36:13.444706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.726 [2024-11-12 10:36:13.445016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.726 [2024-11-12 10:36:13.445280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.726 [2024-11-12 10:36:13.445518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.726 [2024-11-12 10:36:13.445539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.726 [2024-11-12 10:36:13.445892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.726 [2024-11-12 10:36:13.478266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 [2024-11-12 10:36:13.588049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 Malloc0 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:24.986 [2024-11-12 10:36:13.623659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72797 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72798 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72799 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:24.986 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72797 00:14:25.245 [2024-11-12 10:36:13.812352] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:25.245 [2024-11-12 10:36:13.812957] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:25.245 [2024-11-12 10:36:13.813436] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:26.183 Initializing NVMe Controllers 00:14:26.183 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:26.183 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:26.183 Initialization complete. Launching workers. 00:14:26.183 ======================================================== 00:14:26.183 Latency(us) 00:14:26.183 Device Information : IOPS MiB/s Average min max 00:14:26.183 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3625.97 14.16 275.47 199.61 756.95 00:14:26.183 ======================================================== 00:14:26.183 Total : 3625.97 14.16 275.47 199.61 756.95 00:14:26.183 00:14:26.183 Initializing NVMe Controllers 00:14:26.184 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:26.184 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:26.184 Initialization complete. Launching workers. 00:14:26.184 ======================================================== 00:14:26.184 Latency(us) 00:14:26.184 Device Information : IOPS MiB/s Average min max 00:14:26.184 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3630.00 14.18 275.09 189.32 513.44 00:14:26.184 ======================================================== 00:14:26.184 Total : 3630.00 14.18 275.09 189.32 513.44 00:14:26.184 00:14:26.184 Initializing NVMe Controllers 00:14:26.184 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:26.184 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:26.184 Initialization complete. Launching workers. 
00:14:26.184 ======================================================== 00:14:26.184 Latency(us) 00:14:26.184 Device Information : IOPS MiB/s Average min max 00:14:26.184 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3625.00 14.16 275.41 208.79 630.68 00:14:26.184 ======================================================== 00:14:26.184 Total : 3625.00 14.16 275.41 208.79 630.68 00:14:26.184 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72798 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72799 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.184 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:26.184 rmmod nvme_tcp 00:14:26.184 rmmod nvme_fabrics 00:14:26.184 rmmod nvme_keyring 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72778 ']' 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72778 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 72778 ']' 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 72778 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72778 00:14:26.444 killing process with pid 72778 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72778' 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 72778 00:14:26.444 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 72778 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:26.444 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:26.703 00:14:26.703 real 0m2.806s 00:14:26.703 user 0m4.661s 00:14:26.703 
sys 0m1.328s 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:26.703 ************************************ 00:14:26.703 END TEST nvmf_control_msg_list 00:14:26.703 ************************************ 00:14:26.703 10:36:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:26.704 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:26.704 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.704 10:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.704 ************************************ 00:14:26.704 START TEST nvmf_wait_for_buf 00:14:26.704 ************************************ 00:14:26.704 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:26.964 * Looking for test storage... 00:14:26.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:26.964 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:26.964 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:26.964 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.965 --rc genhtml_branch_coverage=1 00:14:26.965 --rc genhtml_function_coverage=1 00:14:26.965 --rc genhtml_legend=1 00:14:26.965 --rc geninfo_all_blocks=1 00:14:26.965 --rc geninfo_unexecuted_blocks=1 00:14:26.965 00:14:26.965 ' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.965 --rc genhtml_branch_coverage=1 00:14:26.965 --rc genhtml_function_coverage=1 00:14:26.965 --rc genhtml_legend=1 00:14:26.965 --rc geninfo_all_blocks=1 00:14:26.965 --rc geninfo_unexecuted_blocks=1 00:14:26.965 00:14:26.965 ' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.965 --rc genhtml_branch_coverage=1 00:14:26.965 --rc genhtml_function_coverage=1 00:14:26.965 --rc genhtml_legend=1 00:14:26.965 --rc geninfo_all_blocks=1 00:14:26.965 --rc geninfo_unexecuted_blocks=1 00:14:26.965 00:14:26.965 ' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:26.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.965 --rc genhtml_branch_coverage=1 00:14:26.965 --rc genhtml_function_coverage=1 00:14:26.965 --rc genhtml_legend=1 00:14:26.965 --rc geninfo_all_blocks=1 00:14:26.965 --rc geninfo_unexecuted_blocks=1 00:14:26.965 00:14:26.965 ' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.965 10:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.965 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:26.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
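The nvmftestinit sequence that follows builds the virtual test network from scratch. A minimal sketch of that topology, reconstructed from the ip/iptables commands visible later in this log (only the first initiator/target interface pair is shown; the second pair, nvmf_init_if2/nvmf_tgt_if2, is set up the same way, and everything not shown in the log is illustrative):

  # target network stack lives in its own namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator gets 10.0.0.1, the target (inside the namespace) gets 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up

  # bridge the *_br peers together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: the initiator can reach the in-namespace target address
  ping -c 1 10.0.0.3

Keeping the target inside nvmf_tgt_ns_spdk gives the initiator and the target separate network stacks on a single VM, which is why every target-side command in this log is wrapped in "ip netns exec nvmf_tgt_ns_spdk".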
00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:26.966 Cannot find device "nvmf_init_br" 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:26.966 Cannot find device "nvmf_init_br2" 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:26.966 Cannot find device "nvmf_tgt_br" 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.966 Cannot find device "nvmf_tgt_br2" 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:26.966 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:27.225 Cannot find device "nvmf_init_br" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:27.226 Cannot find device "nvmf_init_br2" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:27.226 Cannot find device "nvmf_tgt_br" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:27.226 Cannot find device "nvmf_tgt_br2" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:27.226 Cannot find device "nvmf_br" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:27.226 Cannot find device "nvmf_init_if" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:27.226 Cannot find device "nvmf_init_if2" 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.226 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:27.226 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.486 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:27.486 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:27.486 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:27.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:27.486 00:14:27.486 --- 10.0.0.3 ping statistics --- 00:14:27.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.486 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:27.486 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:27.486 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:14:27.486 00:14:27.486 --- 10.0.0.4 ping statistics --- 00:14:27.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.486 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:14:27.486 00:14:27.486 --- 10.0.0.1 ping statistics --- 00:14:27.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.486 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:27.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:27.486 00:14:27.486 --- 10.0.0.2 ping statistics --- 00:14:27.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.486 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73038 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73038 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73038 ']' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:27.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:27.486 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:27.486 [2024-11-12 10:36:16.165222] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
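The wait_for_buf test has to shrink the iobuf pools before the SPDK framework initializes, so the target above is launched with --wait-for-rpc and only configured once its RPC socket answers. A rough sketch of that startup pattern, using the paths from this run (the polling loop stands in for the harness's waitforlisten helper, which performs additional checks):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # poll the RPC socket until the target is ready to accept configuration
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done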
00:14:27.486 [2024-11-12 10:36:16.165309] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.746 [2024-11-12 10:36:16.307960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.746 [2024-11-12 10:36:16.336771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.746 [2024-11-12 10:36:16.336866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.746 [2024-11-12 10:36:16.336893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.746 [2024-11-12 10:36:16.336901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.746 [2024-11-12 10:36:16.336908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.746 [2024-11-12 10:36:16.337203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:27.746 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.746 10:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.005 [2024-11-12 10:36:16.508826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.005 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.005 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.006 Malloc0 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.006 [2024-11-12 10:36:16.551834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:28.006 [2024-11-12 10:36:16.575931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.006 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:28.265 [2024-11-12 10:36:16.774308] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:29.644 Initializing NVMe Controllers 00:14:29.644 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:29.644 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:29.644 Initialization complete. Launching workers. 00:14:29.644 ======================================================== 00:14:29.644 Latency(us) 00:14:29.644 Device Information : IOPS MiB/s Average min max 00:14:29.644 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.50 63.56 7866.09 4348.34 11072.88 00:14:29.644 ======================================================== 00:14:29.644 Total : 508.50 63.56 7866.09 4348.34 11072.88 00:14:29.644 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4858 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4858 -eq 0 ]] 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:29.644 rmmod nvme_tcp 00:14:29.644 rmmod nvme_fabrics 00:14:29.644 rmmod nvme_keyring 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73038 ']' 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73038 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73038 ']' 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- 
# kill -0 73038 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.644 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73038 00:14:29.645 killing process with pid 73038 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73038' 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73038 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73038 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:29.645 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:29.905 00:14:29.905 real 0m3.161s 00:14:29.905 user 0m2.556s 00:14:29.905 sys 0m0.725s 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:29.905 ************************************ 00:14:29.905 END TEST nvmf_wait_for_buf 00:14:29.905 ************************************ 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.905 ************************************ 00:14:29.905 START TEST nvmf_nsid 00:14:29.905 ************************************ 00:14:29.905 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:30.166 * Looking for test storage... 
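Condensed, the wait_for_buf run above boils down to the RPC sequence below (a sketch assembled from the rpc_cmd calls in this log; the rpc() wrapper is illustrative, the harness's rpc_cmd resolves the socket itself):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  # shrink the small iobuf pool before framework init so large reads must wait for buffers
  rpc accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc framework_start_init

  # target side: one 32 MiB malloc namespace behind a TCP listener on 10.0.0.3:4420
  rpc bdev_malloc_create -b Malloc0 32 512
  rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # drive 128 KiB random reads, then require that the small pool ran dry at least once
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  retry_count=$(rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && { echo 'no iobuf waits observed'; exit 1; }

The retry_count of 4858 reported above is what the final check is looking for: with only 154 small buffers configured, the TCP transport had to wait for iobufs thousands of times while serving the 128 KiB reads.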
00:14:30.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.166 --rc genhtml_branch_coverage=1 00:14:30.166 --rc genhtml_function_coverage=1 00:14:30.166 --rc genhtml_legend=1 00:14:30.166 --rc geninfo_all_blocks=1 00:14:30.166 --rc geninfo_unexecuted_blocks=1 00:14:30.166 00:14:30.166 ' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.166 --rc genhtml_branch_coverage=1 00:14:30.166 --rc genhtml_function_coverage=1 00:14:30.166 --rc genhtml_legend=1 00:14:30.166 --rc geninfo_all_blocks=1 00:14:30.166 --rc geninfo_unexecuted_blocks=1 00:14:30.166 00:14:30.166 ' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.166 --rc genhtml_branch_coverage=1 00:14:30.166 --rc genhtml_function_coverage=1 00:14:30.166 --rc genhtml_legend=1 00:14:30.166 --rc geninfo_all_blocks=1 00:14:30.166 --rc geninfo_unexecuted_blocks=1 00:14:30.166 00:14:30.166 ' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.166 --rc genhtml_branch_coverage=1 00:14:30.166 --rc genhtml_function_coverage=1 00:14:30.166 --rc genhtml_legend=1 00:14:30.166 --rc geninfo_all_blocks=1 00:14:30.166 --rc geninfo_unexecuted_blocks=1 00:14:30.166 00:14:30.166 ' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
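The lcov probe traced above runs scripts/common.sh's component-wise comparison ("lt 1.15 2" via cmp_versions) before falling back to the old-style "--rc lcov_*" coverage flags. A condensed sketch of that comparison, written from what the xtrace shows rather than copied from scripts/common.sh, and assuming purely numeric version fields:

  # True (exit 0) when $1 is an older version than $2; fields split on '.', '-' or ':'.
  version_lt() {
      local IFS='.-:'
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      local i x y
      for (( i = 0; i < n; i++ )); do
          x=${a[i]:-0}; y=${b[i]:-0}    # missing fields compare as 0
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                          # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi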
00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.166 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.166 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:30.167 Cannot find device "nvmf_init_br" 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:30.167 Cannot find device "nvmf_init_br2" 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:30.167 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:30.425 Cannot find device "nvmf_tgt_br" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.425 Cannot find device "nvmf_tgt_br2" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:30.425 Cannot find device "nvmf_init_br" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:30.425 Cannot find device "nvmf_init_br2" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:30.425 Cannot find device "nvmf_tgt_br" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:30.425 Cannot find device "nvmf_tgt_br2" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:30.425 Cannot find device "nvmf_br" 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:30.425 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:30.425 Cannot find device "nvmf_init_if" 00:14:30.425 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:30.425 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:30.425 Cannot find device "nvmf_init_if2" 00:14:30.425 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:30.425 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.425 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:30.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:30.426 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
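nvmftestinit's veth path, traced above, first probes for stale interfaces (the "Cannot find device" lines are the expected result on a clean host) and then rebuilds the fixture: a target network namespace, two initiator-side and two target-side veth pairs, 10.0.0.1-4/24 addressing, and an nvmf_br bridge joining the host-side peers. The same commands, collected into one runnable sketch (interface names and addresses exactly as traced; error handling omitted):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry traffic, the *_br peers get enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiators on .1/.2, targets on .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so initiators and targets share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br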
00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:30.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:30.685 00:14:30.685 --- 10.0.0.3 ping statistics --- 00:14:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.685 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:30.685 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:30.685 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:30.685 00:14:30.685 --- 10.0.0.4 ping statistics --- 00:14:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.685 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:30.685 00:14:30.685 --- 10.0.0.1 ping statistics --- 00:14:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.685 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:30.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:30.685 00:14:30.685 --- 10.0.0.2 ping statistics --- 00:14:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.685 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73304 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73304 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73304 ']' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.685 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:30.685 [2024-11-12 10:36:19.345499] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:14:30.685 [2024-11-12 10:36:19.345632] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.944 [2024-11-12 10:36:19.493511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.944 [2024-11-12 10:36:19.521555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.944 [2024-11-12 10:36:19.521630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.944 [2024-11-12 10:36:19.521639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.944 [2024-11-12 10:36:19.521646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.944 [2024-11-12 10:36:19.521653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.944 [2024-11-12 10:36:19.521908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.944 [2024-11-12 10:36:19.550232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73327 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=343c199a-b363-4410-9c98-c7ea4d90997f 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0e0eadb3-97b2-44d2-b171-c1b51ad46af2 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:30.944 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1aa07748-00f8-40da-b87f-15a6a7e48fa0 00:14:30.945 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:30.945 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.945 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:30.945 null0 00:14:30.945 null1 00:14:31.203 null2 00:14:31.203 [2024-11-12 10:36:19.711698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.203 [2024-11-12 10:36:19.728686] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:31.203 [2024-11-12 10:36:19.728780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73327 ] 00:14:31.203 [2024-11-12 10:36:19.735840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73327 /var/tmp/tgt2.sock 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73327 ']' 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
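The nsid test drives two independent targets: the namespaced nvmf_tgt on the default RPC socket and a second spdk_tgt on /var/tmp/tgt2.sock, each pinned to its own core mask, and it blocks until each one's RPC socket answers before issuing rpc.py calls. A sketch of that startup pattern; the poll loop is my stand-in for waitforlisten (whose retry logic is not shown in the trace), using rpc_get_methods as a cheap liveness probe:

  SPDK=/home/vagrant/spdk_repo/spdk

  wait_for_rpc() {                          # stand-in for waitforlisten()
      local pid=$1 sock=$2
      for _ in $(seq 1 100); do
          kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
          "$SPDK"/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }

  # Target 1: runs inside the test namespace, core 0, default RPC socket.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  tgt1pid=$!
  wait_for_rpc "$tgt1pid" /var/tmp/spdk.sock

  # Target 2: plain spdk_tgt on core 1 with a private RPC socket so the two don't collide.
  "$SPDK"/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!
  wait_for_rpc "$tgt2pid" /var/tmp/tgt2.sock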
00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.203 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:31.203 [2024-11-12 10:36:19.879066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.203 [2024-11-12 10:36:19.918357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.462 [2024-11-12 10:36:19.966194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.462 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.462 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:31.462 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:32.030 [2024-11-12 10:36:20.547830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.030 [2024-11-12 10:36:20.563906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:32.030 nvme0n1 nvme0n2 00:14:32.030 nvme1n1 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:14:32.030 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 343c199a-b363-4410-9c98-c7ea4d90997f 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=343c199ab36344109c98c7ea4d90997f 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 343C199AB36344109C98C7EA4D90997F 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 343C199AB36344109C98C7EA4D90997F == \3\4\3\C\1\9\9\A\B\3\6\3\4\4\1\0\9\C\9\8\C\7\E\A\4\D\9\0\9\9\7\F ]] 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0e0eadb3-97b2-44d2-b171-c1b51ad46af2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0e0eadb397b244d2b171c1b51ad46af2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0E0EADB397B244D2B171C1B51AD46AF2 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0E0EADB397B244D2B171C1B51AD46AF2 == \0\E\0\E\A\D\B\3\9\7\B\2\4\4\D\2\B\1\7\1\C\1\B\5\1\A\D\4\6\A\F\2 ]] 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1aa07748-00f8-40da-b87f-15a6a7e48fa0 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:33.407 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:33.407 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1aa0774800f840dab87f15a6a7e48fa0 00:14:33.407 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1AA0774800F840DAB87F15A6A7E48FA0 00:14:33.407 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1AA0774800F840DAB87F15A6A7E48FA0 == \1\A\A\0\7\7\4\8\0\0\F\8\4\0\D\A\B\8\7\F\1\5\A\6\A\7\E\4\8\F\A\0 ]] 00:14:33.407 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73327 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73327 ']' 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73327 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73327 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:33.666 killing process with pid 73327 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73327' 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73327 00:14:33.666 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73327 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.926 rmmod nvme_tcp 00:14:33.926 rmmod nvme_fabrics 00:14:33.926 rmmod nvme_keyring 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73304 ']' 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73304 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73304 ']' 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73304 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73304 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:33.926 killing process with pid 73304 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73304' 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73304 00:14:33.926 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73304 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.185 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.444 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:34.444 00:14:34.444 real 0m4.314s 00:14:34.444 user 0m6.586s 00:14:34.444 sys 0m1.489s 00:14:34.444 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.444 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:34.444 ************************************ 00:14:34.444 END TEST nvmf_nsid 00:14:34.444 ************************************ 00:14:34.444 10:36:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:34.444 00:14:34.444 real 4m55.900s 00:14:34.444 user 10m21.594s 00:14:34.444 sys 1m7.487s 00:14:34.444 10:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.444 10:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.444 ************************************ 00:14:34.444 END TEST nvmf_target_extra 00:14:34.444 ************************************ 00:14:34.444 10:36:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:34.444 10:36:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:34.444 10:36:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.444 10:36:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.444 ************************************ 00:14:34.444 START TEST nvmf_host 00:14:34.444 ************************************ 00:14:34.444 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:34.444 * Looking for test storage... 
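Before the teardown above, the nsid test's core assertion was that each attached namespace reports an NGUID equal to the UUID it was created with, minus the dashes; the trace only shows the "tr -d -" step, so the uppercasing used in the comparison is assumed to live inside uuid2nguid. A standalone sketch of that check for the first namespace from the run:

  # Given the UUID a namespace was created with, derive the NGUID the host should see:
  # strip the dashes and (assumed) uppercase for a case-stable comparison.
  uuid_to_nguid() {
      echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
  }

  check_nguid() {
      local dev=$1 uuid=$2
      local want got
      want=$(uuid_to_nguid "$uuid")
      got=$(nvme id-ns "/dev/$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
      [[ "$got" == "$want" ]] && echo "$dev: NGUID matches $want"
  }

  # Example with the first namespace UUID generated in the run above.
  check_nguid nvme0n1 343c199a-b363-4410-9c98-c7ea4d90997f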
00:14:34.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:34.444 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:34.444 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:34.444 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:34.703 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.704 --rc genhtml_branch_coverage=1 00:14:34.704 --rc genhtml_function_coverage=1 00:14:34.704 --rc genhtml_legend=1 00:14:34.704 --rc geninfo_all_blocks=1 00:14:34.704 --rc geninfo_unexecuted_blocks=1 00:14:34.704 00:14:34.704 ' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:34.704 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:34.704 --rc genhtml_branch_coverage=1 00:14:34.704 --rc genhtml_function_coverage=1 00:14:34.704 --rc genhtml_legend=1 00:14:34.704 --rc geninfo_all_blocks=1 00:14:34.704 --rc geninfo_unexecuted_blocks=1 00:14:34.704 00:14:34.704 ' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.704 --rc genhtml_branch_coverage=1 00:14:34.704 --rc genhtml_function_coverage=1 00:14:34.704 --rc genhtml_legend=1 00:14:34.704 --rc geninfo_all_blocks=1 00:14:34.704 --rc geninfo_unexecuted_blocks=1 00:14:34.704 00:14:34.704 ' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.704 --rc genhtml_branch_coverage=1 00:14:34.704 --rc genhtml_function_coverage=1 00:14:34.704 --rc genhtml_legend=1 00:14:34.704 --rc geninfo_all_blocks=1 00:14:34.704 --rc geninfo_unexecuted_blocks=1 00:14:34.704 00:14:34.704 ' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.704 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:34.704 
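The cmp_versions xtrace earlier in this scope (and repeated below for the nvmf_identify scope) is scripts/common.sh deciding whether the installed lcov predates version 2 before it enables the branch/function coverage flags in LCOV_OPTS. A minimal sketch of that comparison, reduced from the trace rather than copied from the upstream helper, looks like this:

    # Sketch: split both version strings on ".-:" and compare component-wise as
    # integers; lt returns success (0) when ver1 < ver2, e.g. `lt 1.15 2` in the trace.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        # "$2" is the operator; only the '<' path is shown in this sketch.
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # equal versions are not "less than"
    }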
10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:34.704 ************************************ 00:14:34.704 START TEST nvmf_identify 00:14:34.704 ************************************ 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:34.704 * Looking for test storage... 00:14:34.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:14:34.704 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:34.963 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.964 --rc genhtml_branch_coverage=1 00:14:34.964 --rc genhtml_function_coverage=1 00:14:34.964 --rc genhtml_legend=1 00:14:34.964 --rc geninfo_all_blocks=1 00:14:34.964 --rc geninfo_unexecuted_blocks=1 00:14:34.964 00:14:34.964 ' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.964 --rc genhtml_branch_coverage=1 00:14:34.964 --rc genhtml_function_coverage=1 00:14:34.964 --rc genhtml_legend=1 00:14:34.964 --rc geninfo_all_blocks=1 00:14:34.964 --rc geninfo_unexecuted_blocks=1 00:14:34.964 00:14:34.964 ' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.964 --rc genhtml_branch_coverage=1 00:14:34.964 --rc genhtml_function_coverage=1 00:14:34.964 --rc genhtml_legend=1 00:14:34.964 --rc geninfo_all_blocks=1 00:14:34.964 --rc geninfo_unexecuted_blocks=1 00:14:34.964 00:14:34.964 ' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.964 --rc genhtml_branch_coverage=1 00:14:34.964 --rc genhtml_function_coverage=1 00:14:34.964 --rc genhtml_legend=1 00:14:34.964 --rc geninfo_all_blocks=1 00:14:34.964 --rc geninfo_unexecuted_blocks=1 00:14:34.964 00:14:34.964 ' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.964 
10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.964 10:36:23 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.964 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:34.965 Cannot find device "nvmf_init_br" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:34.965 Cannot find device "nvmf_init_br2" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:34.965 Cannot find device "nvmf_tgt_br" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:34.965 Cannot find device "nvmf_tgt_br2" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:34.965 Cannot find device "nvmf_init_br" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:34.965 Cannot find device "nvmf_init_br2" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:34.965 Cannot find device "nvmf_tgt_br" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:34.965 Cannot find device "nvmf_tgt_br2" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:34.965 Cannot find device "nvmf_br" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:34.965 Cannot find device "nvmf_init_if" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:34.965 Cannot find device "nvmf_init_if2" 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.965 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.224 
10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.224 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:35.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:14:35.225 00:14:35.225 --- 10.0.0.3 ping statistics --- 00:14:35.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.225 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:14:35.225 00:14:35.225 --- 10.0.0.4 ping statistics --- 00:14:35.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.225 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:35.225 00:14:35.225 --- 10.0.0.1 ping statistics --- 00:14:35.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.225 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:35.225 00:14:35.225 --- 10.0.0.2 ping statistics --- 00:14:35.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.225 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73674 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73674 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 73674 ']' 00:14:35.225 
10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:35.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:35.225 10:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.484 [2024-11-12 10:36:24.016525] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:35.484 [2024-11-12 10:36:24.016630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.484 [2024-11-12 10:36:24.166566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.484 [2024-11-12 10:36:24.209134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.484 [2024-11-12 10:36:24.209213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.484 [2024-11-12 10:36:24.209229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.484 [2024-11-12 10:36:24.209239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.484 [2024-11-12 10:36:24.209248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
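The block above is nvmf_veth_init from test/nvmf/common.sh building the virtual topology the test runs on, followed by nvmf_tgt coming up inside the target namespace (reactors on cores 0-3, default socket implementation overridden to uring). Condensed from the trace into a hand-runnable sketch (only the first initiator/target veth pair is shown; the nvmf_init_if2/nvmf_tgt_if2 pair and its port-4420 rule are created the same way):

    # Target side lives in its own netns; the initiator stays in the root netns.
    # Both veth peers are enslaved to one bridge so 10.0.0.1 can reach 10.0.0.3.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port, verify reachability, then start the target in the netns.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF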
00:14:35.484 [2024-11-12 10:36:24.210201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.484 [2024-11-12 10:36:24.210312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.484 [2024-11-12 10:36:24.210711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.484 [2024-11-12 10:36:24.210763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.743 [2024-11-12 10:36:24.247563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 [2024-11-12 10:36:24.308354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 Malloc0 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 [2024-11-12 10:36:24.403233] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 [ 00:14:35.744 { 00:14:35.744 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:35.744 "subtype": "Discovery", 00:14:35.744 "listen_addresses": [ 00:14:35.744 { 00:14:35.744 "trtype": "TCP", 00:14:35.744 "adrfam": "IPv4", 00:14:35.744 "traddr": "10.0.0.3", 00:14:35.744 "trsvcid": "4420" 00:14:35.744 } 00:14:35.744 ], 00:14:35.744 "allow_any_host": true, 00:14:35.744 "hosts": [] 00:14:35.744 }, 00:14:35.744 { 00:14:35.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.744 "subtype": "NVMe", 00:14:35.744 "listen_addresses": [ 00:14:35.744 { 00:14:35.744 "trtype": "TCP", 00:14:35.744 "adrfam": "IPv4", 00:14:35.744 "traddr": "10.0.0.3", 00:14:35.744 "trsvcid": "4420" 00:14:35.744 } 00:14:35.744 ], 00:14:35.744 "allow_any_host": true, 00:14:35.744 "hosts": [], 00:14:35.744 "serial_number": "SPDK00000000000001", 00:14:35.744 "model_number": "SPDK bdev Controller", 00:14:35.744 "max_namespaces": 32, 00:14:35.744 "min_cntlid": 1, 00:14:35.744 "max_cntlid": 65519, 00:14:35.744 "namespaces": [ 00:14:35.744 { 00:14:35.744 "nsid": 1, 00:14:35.744 "bdev_name": "Malloc0", 00:14:35.744 "name": "Malloc0", 00:14:35.744 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:35.744 "eui64": "ABCDEF0123456789", 00:14:35.744 "uuid": "e3c26b45-6436-44d7-8dc0-ed740d0f3f34" 00:14:35.744 } 00:14:35.744 ] 00:14:35.744 } 00:14:35.744 ] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.744 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:35.744 [2024-11-12 10:36:24.459397] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
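The rpc_cmd calls earlier in this block configured the target that spdk_nvme_identify is now probing: transport, malloc bdev, subsystem, namespace, and listeners, checked at the end with nvmf_get_subsystems. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC interface; issued by hand from the repo root the same sequence would look roughly like this (a sketch, assuming the default /var/tmp/spdk.sock RPC socket):

    # Create the TCP transport, back a 64 MiB / 512 B-block malloc bdev, expose it
    # as namespace 1 of cnode1, and listen on 10.0.0.3:4420 (data + discovery).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems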
00:14:35.744 [2024-11-12 10:36:24.459473] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73707 ] 00:14:36.006 [2024-11-12 10:36:24.614457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:36.006 [2024-11-12 10:36:24.614535] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:36.006 [2024-11-12 10:36:24.614542] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:36.006 [2024-11-12 10:36:24.614554] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:36.006 [2024-11-12 10:36:24.614563] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:36.006 [2024-11-12 10:36:24.614935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:36.006 [2024-11-12 10:36:24.615005] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x128e750 0 00:14:36.006 [2024-11-12 10:36:24.620284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:36.006 [2024-11-12 10:36:24.620328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:36.006 [2024-11-12 10:36:24.620335] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:36.006 [2024-11-12 10:36:24.620338] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:36.006 [2024-11-12 10:36:24.620366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.620373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.620377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.006 [2024-11-12 10:36:24.620391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:36.006 [2024-11-12 10:36:24.620422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.006 [2024-11-12 10:36:24.625147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.006 [2024-11-12 10:36:24.625172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.006 [2024-11-12 10:36:24.625207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.006 [2024-11-12 10:36:24.625227] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:36.006 [2024-11-12 10:36:24.625235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:36.006 [2024-11-12 10:36:24.625240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:36.006 [2024-11-12 10:36:24.625256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
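The debug lines from here on are spdk_nvme_identify bringing up a controller on the discovery subsystem: connect the admin queue, read VS and CAP, find CC.EN=0 and CSTS.RDY=0, write CC.EN=1, wait for CSTS.RDY=1, then identify the controller, configure AER, and set the keep-alive timeout. As an orientation aid only (this is not what the test runs), the equivalent initiator-side operations with the kernel host stack and nvme-cli would be:

    # Hypothetical manual check from the root (initiator) namespace:
    nvme discover -t tcp -a 10.0.0.3 -s 4420                                  # read the discovery log page
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1     # attach the data subsystem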
00:14:36.006 [2024-11-12 10:36:24.625266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.006 [2024-11-12 10:36:24.625275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.006 [2024-11-12 10:36:24.625303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.006 [2024-11-12 10:36:24.625352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.006 [2024-11-12 10:36:24.625358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.006 [2024-11-12 10:36:24.625362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.006 [2024-11-12 10:36:24.625371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:36.006 [2024-11-12 10:36:24.625378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:36.006 [2024-11-12 10:36:24.625386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.006 [2024-11-12 10:36:24.625416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.006 [2024-11-12 10:36:24.625452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.006 [2024-11-12 10:36:24.625491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.006 [2024-11-12 10:36:24.625498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.006 [2024-11-12 10:36:24.625501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.006 [2024-11-12 10:36:24.625511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:36.006 [2024-11-12 10:36:24.625519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:36.006 [2024-11-12 10:36:24.625526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.006 [2024-11-12 10:36:24.625542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.006 [2024-11-12 10:36:24.625560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.006 [2024-11-12 10:36:24.625610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.006 [2024-11-12 10:36:24.625617] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.006 [2024-11-12 10:36:24.625620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.006 [2024-11-12 10:36:24.625630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:36.006 [2024-11-12 10:36:24.625640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.006 [2024-11-12 10:36:24.625655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.006 [2024-11-12 10:36:24.625673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.006 [2024-11-12 10:36:24.625721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.006 [2024-11-12 10:36:24.625727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.006 [2024-11-12 10:36:24.625731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.006 [2024-11-12 10:36:24.625735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.007 [2024-11-12 10:36:24.625740] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:36.007 [2024-11-12 10:36:24.625745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:36.007 [2024-11-12 10:36:24.625753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:36.007 [2024-11-12 10:36:24.625863] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:36.007 [2024-11-12 10:36:24.625869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:36.007 [2024-11-12 10:36:24.625878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.625882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.625886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.625893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.007 [2024-11-12 10:36:24.625913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.007 [2024-11-12 10:36:24.625964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.007 [2024-11-12 10:36:24.625971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.007 [2024-11-12 10:36:24.625974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:36.007 [2024-11-12 10:36:24.625978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.007 [2024-11-12 10:36:24.625983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:36.007 [2024-11-12 10:36:24.625993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.625998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.007 [2024-11-12 10:36:24.626027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.007 [2024-11-12 10:36:24.626071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.007 [2024-11-12 10:36:24.626077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.007 [2024-11-12 10:36:24.626081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.007 [2024-11-12 10:36:24.626090] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:36.007 [2024-11-12 10:36:24.626096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:36.007 [2024-11-12 10:36:24.626119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.007 [2024-11-12 10:36:24.626161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.007 [2024-11-12 10:36:24.626250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.007 [2024-11-12 10:36:24.626259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.007 [2024-11-12 10:36:24.626263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x128e750): datao=0, datal=4096, cccid=0 00:14:36.007 [2024-11-12 10:36:24.626272] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f2740) on tqpair(0x128e750): expected_datao=0, payload_size=4096 00:14:36.007 [2024-11-12 10:36:24.626277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626285] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626290] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.007 [2024-11-12 10:36:24.626304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.007 [2024-11-12 10:36:24.626308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.007 [2024-11-12 10:36:24.626320] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:36.007 [2024-11-12 10:36:24.626326] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:36.007 [2024-11-12 10:36:24.626330] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:36.007 [2024-11-12 10:36:24.626336] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:36.007 [2024-11-12 10:36:24.626340] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:36.007 [2024-11-12 10:36:24.626345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.007 [2024-11-12 10:36:24.626406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.007 [2024-11-12 10:36:24.626459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.007 [2024-11-12 10:36:24.626466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.007 [2024-11-12 10:36:24.626470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.007 [2024-11-12 10:36:24.626482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.007 
[2024-11-12 10:36:24.626503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.007 [2024-11-12 10:36:24.626522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.007 [2024-11-12 10:36:24.626541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.007 [2024-11-12 10:36:24.626559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:36.007 [2024-11-12 10:36:24.626578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.007 [2024-11-12 10:36:24.626582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x128e750) 00:14:36.007 [2024-11-12 10:36:24.626589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.008 [2024-11-12 10:36:24.626610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2740, cid 0, qid 0 00:14:36.008 [2024-11-12 10:36:24.626618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f28c0, cid 1, qid 0 00:14:36.008 [2024-11-12 10:36:24.626622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2a40, cid 2, qid 0 00:14:36.008 [2024-11-12 10:36:24.626627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.008 [2024-11-12 10:36:24.626632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2d40, cid 4, qid 0 00:14:36.008 [2024-11-12 10:36:24.626707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.626714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.626717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2d40) on tqpair=0x128e750 00:14:36.008 [2024-11-12 
10:36:24.626727] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:36.008 [2024-11-12 10:36:24.626732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:36.008 [2024-11-12 10:36:24.626744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x128e750) 00:14:36.008 [2024-11-12 10:36:24.626756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.008 [2024-11-12 10:36:24.626774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2d40, cid 4, qid 0 00:14:36.008 [2024-11-12 10:36:24.626826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.008 [2024-11-12 10:36:24.626832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.008 [2024-11-12 10:36:24.626836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x128e750): datao=0, datal=4096, cccid=4 00:14:36.008 [2024-11-12 10:36:24.626844] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f2d40) on tqpair(0x128e750): expected_datao=0, payload_size=4096 00:14:36.008 [2024-11-12 10:36:24.626849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626856] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626860] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.626874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.626877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2d40) on tqpair=0x128e750 00:14:36.008 [2024-11-12 10:36:24.626894] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:36.008 [2024-11-12 10:36:24.626924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x128e750) 00:14:36.008 [2024-11-12 10:36:24.626937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.008 [2024-11-12 10:36:24.626945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.626953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x128e750) 00:14:36.008 [2024-11-12 10:36:24.626959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.008 [2024-11-12 10:36:24.626984] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2d40, cid 4, qid 0 00:14:36.008 [2024-11-12 10:36:24.626992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2ec0, cid 5, qid 0 00:14:36.008 [2024-11-12 10:36:24.627092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.008 [2024-11-12 10:36:24.627099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.008 [2024-11-12 10:36:24.627113] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627133] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x128e750): datao=0, datal=1024, cccid=4 00:14:36.008 [2024-11-12 10:36:24.627154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f2d40) on tqpair(0x128e750): expected_datao=0, payload_size=1024 00:14:36.008 [2024-11-12 10:36:24.627159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627166] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627170] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.627183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.627186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2ec0) on tqpair=0x128e750 00:14:36.008 [2024-11-12 10:36:24.627222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.627232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.627236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2d40) on tqpair=0x128e750 00:14:36.008 [2024-11-12 10:36:24.627254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x128e750) 00:14:36.008 [2024-11-12 10:36:24.627267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.008 [2024-11-12 10:36:24.627301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2d40, cid 4, qid 0 00:14:36.008 [2024-11-12 10:36:24.627372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.008 [2024-11-12 10:36:24.627379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.008 [2024-11-12 10:36:24.627383] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627387] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x128e750): datao=0, datal=3072, cccid=4 00:14:36.008 [2024-11-12 10:36:24.627392] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f2d40) on tqpair(0x128e750): expected_datao=0, payload_size=3072 00:14:36.008 [2024-11-12 10:36:24.627397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627404] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:14:36.008 [2024-11-12 10:36:24.627408] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.627423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.627427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2d40) on tqpair=0x128e750 00:14:36.008 [2024-11-12 10:36:24.627442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x128e750) 00:14:36.008 [2024-11-12 10:36:24.627454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.008 [2024-11-12 10:36:24.627478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2d40, cid 4, qid 0 00:14:36.008 [2024-11-12 10:36:24.627566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.008 [2024-11-12 10:36:24.627573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.008 [2024-11-12 10:36:24.627577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x128e750): datao=0, datal=8, cccid=4 00:14:36.008 [2024-11-12 10:36:24.627585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f2d40) on tqpair(0x128e750): expected_datao=0, payload_size=8 00:14:36.008 [2024-11-12 10:36:24.627589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627596] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627600] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.008 [2024-11-12 10:36:24.627621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.008 [2024-11-12 10:36:24.627625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.008 [2024-11-12 10:36:24.627629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2d40) on tqpair=0x128e750 00:14:36.008 ===================================================== 00:14:36.008 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:36.008 ===================================================== 00:14:36.008 Controller Capabilities/Features 00:14:36.008 ================================ 00:14:36.008 Vendor ID: 0000 00:14:36.009 Subsystem Vendor ID: 0000 00:14:36.009 Serial Number: .................... 00:14:36.009 Model Number: ........................................ 
00:14:36.009 Firmware Version: 25.01 00:14:36.009 Recommended Arb Burst: 0 00:14:36.009 IEEE OUI Identifier: 00 00 00 00:14:36.009 Multi-path I/O 00:14:36.009 May have multiple subsystem ports: No 00:14:36.009 May have multiple controllers: No 00:14:36.009 Associated with SR-IOV VF: No 00:14:36.009 Max Data Transfer Size: 131072 00:14:36.009 Max Number of Namespaces: 0 00:14:36.009 Max Number of I/O Queues: 1024 00:14:36.009 NVMe Specification Version (VS): 1.3 00:14:36.009 NVMe Specification Version (Identify): 1.3 00:14:36.009 Maximum Queue Entries: 128 00:14:36.009 Contiguous Queues Required: Yes 00:14:36.009 Arbitration Mechanisms Supported 00:14:36.009 Weighted Round Robin: Not Supported 00:14:36.009 Vendor Specific: Not Supported 00:14:36.009 Reset Timeout: 15000 ms 00:14:36.009 Doorbell Stride: 4 bytes 00:14:36.009 NVM Subsystem Reset: Not Supported 00:14:36.009 Command Sets Supported 00:14:36.009 NVM Command Set: Supported 00:14:36.009 Boot Partition: Not Supported 00:14:36.009 Memory Page Size Minimum: 4096 bytes 00:14:36.009 Memory Page Size Maximum: 4096 bytes 00:14:36.009 Persistent Memory Region: Not Supported 00:14:36.009 Optional Asynchronous Events Supported 00:14:36.009 Namespace Attribute Notices: Not Supported 00:14:36.009 Firmware Activation Notices: Not Supported 00:14:36.009 ANA Change Notices: Not Supported 00:14:36.009 PLE Aggregate Log Change Notices: Not Supported 00:14:36.009 LBA Status Info Alert Notices: Not Supported 00:14:36.009 EGE Aggregate Log Change Notices: Not Supported 00:14:36.009 Normal NVM Subsystem Shutdown event: Not Supported 00:14:36.009 Zone Descriptor Change Notices: Not Supported 00:14:36.009 Discovery Log Change Notices: Supported 00:14:36.009 Controller Attributes 00:14:36.009 128-bit Host Identifier: Not Supported 00:14:36.009 Non-Operational Permissive Mode: Not Supported 00:14:36.009 NVM Sets: Not Supported 00:14:36.009 Read Recovery Levels: Not Supported 00:14:36.009 Endurance Groups: Not Supported 00:14:36.009 Predictable Latency Mode: Not Supported 00:14:36.009 Traffic Based Keep ALive: Not Supported 00:14:36.009 Namespace Granularity: Not Supported 00:14:36.009 SQ Associations: Not Supported 00:14:36.009 UUID List: Not Supported 00:14:36.009 Multi-Domain Subsystem: Not Supported 00:14:36.009 Fixed Capacity Management: Not Supported 00:14:36.009 Variable Capacity Management: Not Supported 00:14:36.009 Delete Endurance Group: Not Supported 00:14:36.009 Delete NVM Set: Not Supported 00:14:36.009 Extended LBA Formats Supported: Not Supported 00:14:36.009 Flexible Data Placement Supported: Not Supported 00:14:36.009 00:14:36.009 Controller Memory Buffer Support 00:14:36.009 ================================ 00:14:36.009 Supported: No 00:14:36.009 00:14:36.009 Persistent Memory Region Support 00:14:36.009 ================================ 00:14:36.009 Supported: No 00:14:36.009 00:14:36.009 Admin Command Set Attributes 00:14:36.009 ============================ 00:14:36.009 Security Send/Receive: Not Supported 00:14:36.009 Format NVM: Not Supported 00:14:36.009 Firmware Activate/Download: Not Supported 00:14:36.009 Namespace Management: Not Supported 00:14:36.009 Device Self-Test: Not Supported 00:14:36.009 Directives: Not Supported 00:14:36.009 NVMe-MI: Not Supported 00:14:36.009 Virtualization Management: Not Supported 00:14:36.009 Doorbell Buffer Config: Not Supported 00:14:36.009 Get LBA Status Capability: Not Supported 00:14:36.009 Command & Feature Lockdown Capability: Not Supported 00:14:36.009 Abort Command Limit: 1 00:14:36.009 Async 
Event Request Limit: 4 00:14:36.009 Number of Firmware Slots: N/A 00:14:36.009 Firmware Slot 1 Read-Only: N/A 00:14:36.009 Firmware Activation Without Reset: N/A 00:14:36.009 Multiple Update Detection Support: N/A 00:14:36.009 Firmware Update Granularity: No Information Provided 00:14:36.009 Per-Namespace SMART Log: No 00:14:36.009 Asymmetric Namespace Access Log Page: Not Supported 00:14:36.009 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:36.009 Command Effects Log Page: Not Supported 00:14:36.009 Get Log Page Extended Data: Supported 00:14:36.009 Telemetry Log Pages: Not Supported 00:14:36.009 Persistent Event Log Pages: Not Supported 00:14:36.009 Supported Log Pages Log Page: May Support 00:14:36.009 Commands Supported & Effects Log Page: Not Supported 00:14:36.009 Feature Identifiers & Effects Log Page:May Support 00:14:36.009 NVMe-MI Commands & Effects Log Page: May Support 00:14:36.009 Data Area 4 for Telemetry Log: Not Supported 00:14:36.009 Error Log Page Entries Supported: 128 00:14:36.009 Keep Alive: Not Supported 00:14:36.009 00:14:36.009 NVM Command Set Attributes 00:14:36.009 ========================== 00:14:36.009 Submission Queue Entry Size 00:14:36.009 Max: 1 00:14:36.009 Min: 1 00:14:36.009 Completion Queue Entry Size 00:14:36.009 Max: 1 00:14:36.009 Min: 1 00:14:36.009 Number of Namespaces: 0 00:14:36.009 Compare Command: Not Supported 00:14:36.009 Write Uncorrectable Command: Not Supported 00:14:36.009 Dataset Management Command: Not Supported 00:14:36.009 Write Zeroes Command: Not Supported 00:14:36.009 Set Features Save Field: Not Supported 00:14:36.009 Reservations: Not Supported 00:14:36.009 Timestamp: Not Supported 00:14:36.009 Copy: Not Supported 00:14:36.009 Volatile Write Cache: Not Present 00:14:36.009 Atomic Write Unit (Normal): 1 00:14:36.009 Atomic Write Unit (PFail): 1 00:14:36.009 Atomic Compare & Write Unit: 1 00:14:36.009 Fused Compare & Write: Supported 00:14:36.009 Scatter-Gather List 00:14:36.009 SGL Command Set: Supported 00:14:36.009 SGL Keyed: Supported 00:14:36.009 SGL Bit Bucket Descriptor: Not Supported 00:14:36.009 SGL Metadata Pointer: Not Supported 00:14:36.009 Oversized SGL: Not Supported 00:14:36.009 SGL Metadata Address: Not Supported 00:14:36.009 SGL Offset: Supported 00:14:36.009 Transport SGL Data Block: Not Supported 00:14:36.009 Replay Protected Memory Block: Not Supported 00:14:36.009 00:14:36.009 Firmware Slot Information 00:14:36.009 ========================= 00:14:36.009 Active slot: 0 00:14:36.009 00:14:36.009 00:14:36.009 Error Log 00:14:36.009 ========= 00:14:36.009 00:14:36.009 Active Namespaces 00:14:36.009 ================= 00:14:36.009 Discovery Log Page 00:14:36.009 ================== 00:14:36.009 Generation Counter: 2 00:14:36.009 Number of Records: 2 00:14:36.009 Record Format: 0 00:14:36.009 00:14:36.009 Discovery Log Entry 0 00:14:36.009 ---------------------- 00:14:36.009 Transport Type: 3 (TCP) 00:14:36.009 Address Family: 1 (IPv4) 00:14:36.009 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:36.009 Entry Flags: 00:14:36.009 Duplicate Returned Information: 1 00:14:36.009 Explicit Persistent Connection Support for Discovery: 1 00:14:36.009 Transport Requirements: 00:14:36.009 Secure Channel: Not Required 00:14:36.009 Port ID: 0 (0x0000) 00:14:36.009 Controller ID: 65535 (0xffff) 00:14:36.009 Admin Max SQ Size: 128 00:14:36.009 Transport Service Identifier: 4420 00:14:36.009 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:36.010 Transport Address: 10.0.0.3 00:14:36.010 
Discovery Log Entry 1 00:14:36.010 ---------------------- 00:14:36.010 Transport Type: 3 (TCP) 00:14:36.010 Address Family: 1 (IPv4) 00:14:36.010 Subsystem Type: 2 (NVM Subsystem) 00:14:36.010 Entry Flags: 00:14:36.010 Duplicate Returned Information: 0 00:14:36.010 Explicit Persistent Connection Support for Discovery: 0 00:14:36.010 Transport Requirements: 00:14:36.010 Secure Channel: Not Required 00:14:36.010 Port ID: 0 (0x0000) 00:14:36.010 Controller ID: 65535 (0xffff) 00:14:36.010 Admin Max SQ Size: 128 00:14:36.010 Transport Service Identifier: 4420 00:14:36.010 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:36.010 Transport Address: 10.0.0.3 [2024-11-12 10:36:24.627751] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:36.010 [2024-11-12 10:36:24.627767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2740) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.627775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.010 [2024-11-12 10:36:24.627780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f28c0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.627785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.010 [2024-11-12 10:36:24.627790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2a40) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.627795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.010 [2024-11-12 10:36:24.627800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.627804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.010 [2024-11-12 10:36:24.627814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.627818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.627822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.627830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.627856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.627913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.627921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.627924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.627928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.627936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.627940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.627944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 
10:36:24.627951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.627973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628052] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:36.010 [2024-11-12 10:36:24.628057] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:36.010 [2024-11-12 10:36:24.628067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628302] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.010 [2024-11-12 10:36:24.628601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.010 [2024-11-12 10:36:24.628619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.010 [2024-11-12 10:36:24.628658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.010 [2024-11-12 10:36:24.628665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.010 [2024-11-12 10:36:24.628669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.010 [2024-11-12 10:36:24.628682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.010 [2024-11-12 10:36:24.628687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.628698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.628715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 [2024-11-12 10:36:24.628758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.628770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.011 [2024-11-12 10:36:24.628774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.628789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.628804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.628822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 [2024-11-12 10:36:24.628863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.628869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.011 [2024-11-12 10:36:24.628873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.628887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.628902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.628920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 
[2024-11-12 10:36:24.628959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.628966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.011 [2024-11-12 10:36:24.628970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.628983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.628991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.628998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.629016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 [2024-11-12 10:36:24.629058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.629065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.011 [2024-11-12 10:36:24.629068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.629072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.629082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.629086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.629090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.629097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.629115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 [2024-11-12 10:36:24.629154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.629161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.011 [2024-11-12 10:36:24.629164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.629168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.636232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.636251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.636255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x128e750) 00:14:36.011 [2024-11-12 10:36:24.636280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.011 [2024-11-12 10:36:24.636307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f2bc0, cid 3, qid 0 00:14:36.011 [2024-11-12 10:36:24.636359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.011 [2024-11-12 10:36:24.636367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:36.011 [2024-11-12 10:36:24.636370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.011 [2024-11-12 10:36:24.636375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12f2bc0) on tqpair=0x128e750 00:14:36.011 [2024-11-12 10:36:24.636384] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 8 milliseconds 00:14:36.011 00:14:36.011 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:36.011 [2024-11-12 10:36:24.675809] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:36.011 [2024-11-12 10:36:24.675863] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73709 ] 00:14:36.274 [2024-11-12 10:36:24.837233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:36.274 [2024-11-12 10:36:24.837325] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:36.274 [2024-11-12 10:36:24.837333] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:36.274 [2024-11-12 10:36:24.837349] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:36.274 [2024-11-12 10:36:24.837360] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:36.274 [2024-11-12 10:36:24.837719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:36.274 [2024-11-12 10:36:24.837794] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdcc750 0 00:14:36.274 [2024-11-12 10:36:24.852239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:36.274 [2024-11-12 10:36:24.852284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:36.274 [2024-11-12 10:36:24.852291] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:36.274 [2024-11-12 10:36:24.852295] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:36.274 [2024-11-12 10:36:24.852324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.852330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.852335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.852349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:36.274 [2024-11-12 10:36:24.852380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.860286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.860307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.274 [2024-11-12 10:36:24.860327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860333] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.274 [2024-11-12 10:36:24.860348] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:36.274 [2024-11-12 10:36:24.860357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:36.274 [2024-11-12 10:36:24.860364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:36.274 [2024-11-12 10:36:24.860380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.860399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.274 [2024-11-12 10:36:24.860428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.860487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.860494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.274 [2024-11-12 10:36:24.860498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.274 [2024-11-12 10:36:24.860509] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:36.274 [2024-11-12 10:36:24.860518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:36.274 [2024-11-12 10:36:24.860526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.860557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.274 [2024-11-12 10:36:24.860578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.860950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.860966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.274 [2024-11-12 10:36:24.860971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.860975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.274 [2024-11-12 10:36:24.860982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:36.274 [2024-11-12 10:36:24.860991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:36.274 [2024-11-12 10:36:24.861016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:36.274 [2024-11-12 10:36:24.861021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.861033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.274 [2024-11-12 10:36:24.861053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.861130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.861138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.274 [2024-11-12 10:36:24.861142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.274 [2024-11-12 10:36:24.861153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:36.274 [2024-11-12 10:36:24.861164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.861181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.274 [2024-11-12 10:36:24.861201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.861428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.861437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.274 [2024-11-12 10:36:24.861442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.274 [2024-11-12 10:36:24.861452] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:36.274 [2024-11-12 10:36:24.861458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:36.274 [2024-11-12 10:36:24.861467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:36.274 [2024-11-12 10:36:24.861593] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:36.274 [2024-11-12 10:36:24.861599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:36.274 [2024-11-12 10:36:24.861609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.274 [2024-11-12 10:36:24.861617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xdcc750) 00:14:36.274 [2024-11-12 10:36:24.861625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.274 [2024-11-12 10:36:24.861648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.274 [2024-11-12 10:36:24.862159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.274 [2024-11-12 10:36:24.862175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.862194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.862199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 10:36:24.862206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:36.275 [2024-11-12 10:36:24.862217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.862223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.862227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.862235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.275 [2024-11-12 10:36:24.862256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.275 [2024-11-12 10:36:24.862311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.275 [2024-11-12 10:36:24.862318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.862322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.862326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 10:36:24.862332] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:36.275 [2024-11-12 10:36:24.862337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.862346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:36.275 [2024-11-12 10:36:24.862362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.862373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.862378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.862386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.275 [2024-11-12 10:36:24.862407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.275 [2024-11-12 10:36:24.863012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.275 [2024-11-12 10:36:24.863029] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.275 [2024-11-12 10:36:24.863050] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863055] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=4096, cccid=0 00:14:36.275 [2024-11-12 10:36:24.863060] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30740) on tqpair(0xdcc750): expected_datao=0, payload_size=4096 00:14:36.275 [2024-11-12 10:36:24.863065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863074] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863079] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.275 [2024-11-12 10:36:24.863095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.863098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 10:36:24.863140] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:36.275 [2024-11-12 10:36:24.863146] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:36.275 [2024-11-12 10:36:24.863151] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:36.275 [2024-11-12 10:36:24.863156] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:36.275 [2024-11-12 10:36:24.863162] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:36.275 [2024-11-12 10:36:24.863167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.863188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.863212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.275 [2024-11-12 10:36:24.863256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.275 [2024-11-12 10:36:24.863309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.275 [2024-11-12 10:36:24.863317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.863321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 
10:36:24.863333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.275 [2024-11-12 10:36:24.863356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.275 [2024-11-12 10:36:24.863377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.275 [2024-11-12 10:36:24.863398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.275 [2024-11-12 10:36:24.863418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.863432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.863440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.863445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.863452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.275 [2024-11-12 10:36:24.863476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30740, cid 0, qid 0 00:14:36.275 [2024-11-12 10:36:24.863484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe308c0, cid 1, qid 0 00:14:36.275 [2024-11-12 10:36:24.863490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30a40, cid 2, qid 0 00:14:36.275 [2024-11-12 10:36:24.863495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.275 [2024-11-12 10:36:24.863500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.275 
[2024-11-12 10:36:24.864101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.275 [2024-11-12 10:36:24.864118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.864123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.864128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 10:36:24.864134] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:36.275 [2024-11-12 10:36:24.864140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.864150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.864162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.864170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.864175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.868272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.868301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:36.275 [2024-11-12 10:36:24.868329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.275 [2024-11-12 10:36:24.868390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.275 [2024-11-12 10:36:24.868398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.275 [2024-11-12 10:36:24.868401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.868406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.275 [2024-11-12 10:36:24.868491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.868505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:36.275 [2024-11-12 10:36:24.868529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.275 [2024-11-12 10:36:24.868534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.275 [2024-11-12 10:36:24.868542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.275 [2024-11-12 10:36:24.868564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.275 [2024-11-12 10:36:24.868934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.276 [2024-11-12 10:36:24.868951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.276 [2024-11-12 10:36:24.868956] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.868960] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=4096, cccid=4 00:14:36.276 [2024-11-12 10:36:24.868965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30d40) on tqpair(0xdcc750): expected_datao=0, payload_size=4096 00:14:36.276 [2024-11-12 10:36:24.868970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.868978] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.868982] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.868992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.868998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.869002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.869026] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:36.276 [2024-11-12 10:36:24.869038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.869060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.869068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.869080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.869129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.276 [2024-11-12 10:36:24.869540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.276 [2024-11-12 10:36:24.869557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.276 [2024-11-12 10:36:24.869562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869566] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=4096, cccid=4 00:14:36.276 [2024-11-12 10:36:24.869571] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30d40) on tqpair(0xdcc750): expected_datao=0, payload_size=4096 00:14:36.276 [2024-11-12 10:36:24.869576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869583] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.869603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.869607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 
[2024-11-12 10:36:24.869611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.869644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.869655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.869664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.869676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.869713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.276 [2024-11-12 10:36:24.869967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.276 [2024-11-12 10:36:24.869983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.276 [2024-11-12 10:36:24.869988] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.869992] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=4096, cccid=4 00:14:36.276 [2024-11-12 10:36:24.869997] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30d40) on tqpair(0xdcc750): expected_datao=0, payload_size=4096 00:14:36.276 [2024-11-12 10:36:24.870002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.870030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.870034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.870049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870125] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:36.276 [2024-11-12 10:36:24.870130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:36.276 [2024-11-12 10:36:24.870136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:36.276 [2024-11-12 10:36:24.870154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.870167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.870175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.870190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.276 [2024-11-12 10:36:24.870231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.276 [2024-11-12 10:36:24.870241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30ec0, cid 5, qid 0 00:14:36.276 [2024-11-12 10:36:24.870790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.870806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.870811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.870823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.870829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.870833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30ec0) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.870849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.870853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.870861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.870881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30ec0, cid 5, qid 0 00:14:36.276 [2024-11-12 10:36:24.871022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.871030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.871034] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30ec0) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.871049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.871060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.871079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30ec0, cid 5, qid 0 00:14:36.276 [2024-11-12 10:36:24.871436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.871457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.871462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30ec0) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.871479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcc750) 00:14:36.276 [2024-11-12 10:36:24.871493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.276 [2024-11-12 10:36:24.871516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30ec0, cid 5, qid 0 00:14:36.276 [2024-11-12 10:36:24.871593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.276 [2024-11-12 10:36:24.871600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.276 [2024-11-12 10:36:24.871620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30ec0) on tqpair=0xdcc750 00:14:36.276 [2024-11-12 10:36:24.871645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.276 [2024-11-12 10:36:24.871651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdcc750) 00:14:36.277 [2024-11-12 10:36:24.871659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.277 [2024-11-12 10:36:24.871667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.871671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdcc750) 00:14:36.277 [2024-11-12 10:36:24.871678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.277 [2024-11-12 10:36:24.871685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.871689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdcc750) 00:14:36.277 [2024-11-12 10:36:24.871696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.277 [2024-11-12 10:36:24.871704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.871708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdcc750) 00:14:36.277 [2024-11-12 10:36:24.871714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.277 [2024-11-12 10:36:24.871736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30ec0, cid 5, qid 0 00:14:36.277 [2024-11-12 10:36:24.871744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30d40, cid 4, qid 0 00:14:36.277 [2024-11-12 10:36:24.871749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe31040, cid 6, qid 0 00:14:36.277 [2024-11-12 10:36:24.871754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe311c0, cid 7, qid 0 00:14:36.277 [2024-11-12 10:36:24.876256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.277 [2024-11-12 10:36:24.876277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.277 [2024-11-12 10:36:24.876298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=8192, cccid=5 00:14:36.277 [2024-11-12 10:36:24.876307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30ec0) on tqpair(0xdcc750): expected_datao=0, payload_size=8192 00:14:36.277 [2024-11-12 10:36:24.876312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876326] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.277 [2024-11-12 10:36:24.876343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.277 [2024-11-12 10:36:24.876347] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876351] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=512, cccid=4 00:14:36.277 [2024-11-12 10:36:24.876356] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe30d40) on tqpair(0xdcc750): expected_datao=0, payload_size=512 00:14:36.277 [2024-11-12 10:36:24.876360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876367] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.277 [2024-11-12 10:36:24.876383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.277 [2024-11-12 10:36:24.876386] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=512, cccid=6 00:14:36.277 [2024-11-12 10:36:24.876395] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe31040) on tqpair(0xdcc750): expected_datao=0, payload_size=512 00:14:36.277 [2024-11-12 10:36:24.876399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876410] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:36.277 [2024-11-12 10:36:24.876422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:36.277 [2024-11-12 10:36:24.876425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876429] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdcc750): datao=0, datal=4096, cccid=7 00:14:36.277 [2024-11-12 10:36:24.876434] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe311c0) on tqpair(0xdcc750): expected_datao=0, payload_size=4096 00:14:36.277 [2024-11-12 10:36:24.876438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876445] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.277 [2024-11-12 10:36:24.876477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.277 [2024-11-12 10:36:24.876481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30ec0) on tqpair=0xdcc750 00:14:36.277 [2024-11-12 10:36:24.876504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.277 [2024-11-12 10:36:24.876511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.277 [2024-11-12 10:36:24.876515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30d40) on tqpair=0xdcc750 00:14:36.277 [2024-11-12 10:36:24.876532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.277 [2024-11-12 10:36:24.876539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.277 [2024-11-12 10:36:24.876542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe31040) on tqpair=0xdcc750 00:14:36.277 [2024-11-12 10:36:24.876554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.277 [2024-11-12 10:36:24.876561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.277 [2024-11-12 10:36:24.876565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.277 [2024-11-12 10:36:24.876569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe311c0) on tqpair=0xdcc750 00:14:36.277 ===================================================== 00:14:36.277 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.277 ===================================================== 00:14:36.277 Controller Capabilities/Features 00:14:36.277 ================================ 
00:14:36.277 Vendor ID: 8086 00:14:36.277 Subsystem Vendor ID: 8086 00:14:36.277 Serial Number: SPDK00000000000001 00:14:36.277 Model Number: SPDK bdev Controller 00:14:36.277 Firmware Version: 25.01 00:14:36.277 Recommended Arb Burst: 6 00:14:36.277 IEEE OUI Identifier: e4 d2 5c 00:14:36.277 Multi-path I/O 00:14:36.277 May have multiple subsystem ports: Yes 00:14:36.277 May have multiple controllers: Yes 00:14:36.277 Associated with SR-IOV VF: No 00:14:36.277 Max Data Transfer Size: 131072 00:14:36.277 Max Number of Namespaces: 32 00:14:36.277 Max Number of I/O Queues: 127 00:14:36.277 NVMe Specification Version (VS): 1.3 00:14:36.277 NVMe Specification Version (Identify): 1.3 00:14:36.277 Maximum Queue Entries: 128 00:14:36.277 Contiguous Queues Required: Yes 00:14:36.277 Arbitration Mechanisms Supported 00:14:36.277 Weighted Round Robin: Not Supported 00:14:36.277 Vendor Specific: Not Supported 00:14:36.277 Reset Timeout: 15000 ms 00:14:36.277 Doorbell Stride: 4 bytes 00:14:36.277 NVM Subsystem Reset: Not Supported 00:14:36.277 Command Sets Supported 00:14:36.277 NVM Command Set: Supported 00:14:36.277 Boot Partition: Not Supported 00:14:36.277 Memory Page Size Minimum: 4096 bytes 00:14:36.277 Memory Page Size Maximum: 4096 bytes 00:14:36.277 Persistent Memory Region: Not Supported 00:14:36.277 Optional Asynchronous Events Supported 00:14:36.277 Namespace Attribute Notices: Supported 00:14:36.277 Firmware Activation Notices: Not Supported 00:14:36.277 ANA Change Notices: Not Supported 00:14:36.277 PLE Aggregate Log Change Notices: Not Supported 00:14:36.277 LBA Status Info Alert Notices: Not Supported 00:14:36.277 EGE Aggregate Log Change Notices: Not Supported 00:14:36.277 Normal NVM Subsystem Shutdown event: Not Supported 00:14:36.277 Zone Descriptor Change Notices: Not Supported 00:14:36.277 Discovery Log Change Notices: Not Supported 00:14:36.277 Controller Attributes 00:14:36.277 128-bit Host Identifier: Supported 00:14:36.277 Non-Operational Permissive Mode: Not Supported 00:14:36.277 NVM Sets: Not Supported 00:14:36.277 Read Recovery Levels: Not Supported 00:14:36.277 Endurance Groups: Not Supported 00:14:36.277 Predictable Latency Mode: Not Supported 00:14:36.277 Traffic Based Keep ALive: Not Supported 00:14:36.277 Namespace Granularity: Not Supported 00:14:36.277 SQ Associations: Not Supported 00:14:36.277 UUID List: Not Supported 00:14:36.277 Multi-Domain Subsystem: Not Supported 00:14:36.277 Fixed Capacity Management: Not Supported 00:14:36.277 Variable Capacity Management: Not Supported 00:14:36.277 Delete Endurance Group: Not Supported 00:14:36.277 Delete NVM Set: Not Supported 00:14:36.277 Extended LBA Formats Supported: Not Supported 00:14:36.277 Flexible Data Placement Supported: Not Supported 00:14:36.277 00:14:36.277 Controller Memory Buffer Support 00:14:36.277 ================================ 00:14:36.277 Supported: No 00:14:36.277 00:14:36.277 Persistent Memory Region Support 00:14:36.277 ================================ 00:14:36.277 Supported: No 00:14:36.277 00:14:36.277 Admin Command Set Attributes 00:14:36.277 ============================ 00:14:36.277 Security Send/Receive: Not Supported 00:14:36.277 Format NVM: Not Supported 00:14:36.277 Firmware Activate/Download: Not Supported 00:14:36.277 Namespace Management: Not Supported 00:14:36.277 Device Self-Test: Not Supported 00:14:36.277 Directives: Not Supported 00:14:36.278 NVMe-MI: Not Supported 00:14:36.278 Virtualization Management: Not Supported 00:14:36.278 Doorbell Buffer Config: Not Supported 00:14:36.278 
Get LBA Status Capability: Not Supported 00:14:36.278 Command & Feature Lockdown Capability: Not Supported 00:14:36.278 Abort Command Limit: 4 00:14:36.278 Async Event Request Limit: 4 00:14:36.278 Number of Firmware Slots: N/A 00:14:36.278 Firmware Slot 1 Read-Only: N/A 00:14:36.278 Firmware Activation Without Reset: N/A 00:14:36.278 Multiple Update Detection Support: N/A 00:14:36.278 Firmware Update Granularity: No Information Provided 00:14:36.278 Per-Namespace SMART Log: No 00:14:36.278 Asymmetric Namespace Access Log Page: Not Supported 00:14:36.278 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:36.278 Command Effects Log Page: Supported 00:14:36.278 Get Log Page Extended Data: Supported 00:14:36.278 Telemetry Log Pages: Not Supported 00:14:36.278 Persistent Event Log Pages: Not Supported 00:14:36.278 Supported Log Pages Log Page: May Support 00:14:36.278 Commands Supported & Effects Log Page: Not Supported 00:14:36.278 Feature Identifiers & Effects Log Page:May Support 00:14:36.278 NVMe-MI Commands & Effects Log Page: May Support 00:14:36.278 Data Area 4 for Telemetry Log: Not Supported 00:14:36.278 Error Log Page Entries Supported: 128 00:14:36.278 Keep Alive: Supported 00:14:36.278 Keep Alive Granularity: 10000 ms 00:14:36.278 00:14:36.278 NVM Command Set Attributes 00:14:36.278 ========================== 00:14:36.278 Submission Queue Entry Size 00:14:36.278 Max: 64 00:14:36.278 Min: 64 00:14:36.278 Completion Queue Entry Size 00:14:36.278 Max: 16 00:14:36.278 Min: 16 00:14:36.278 Number of Namespaces: 32 00:14:36.278 Compare Command: Supported 00:14:36.278 Write Uncorrectable Command: Not Supported 00:14:36.278 Dataset Management Command: Supported 00:14:36.278 Write Zeroes Command: Supported 00:14:36.278 Set Features Save Field: Not Supported 00:14:36.278 Reservations: Supported 00:14:36.278 Timestamp: Not Supported 00:14:36.278 Copy: Supported 00:14:36.278 Volatile Write Cache: Present 00:14:36.278 Atomic Write Unit (Normal): 1 00:14:36.278 Atomic Write Unit (PFail): 1 00:14:36.278 Atomic Compare & Write Unit: 1 00:14:36.278 Fused Compare & Write: Supported 00:14:36.278 Scatter-Gather List 00:14:36.278 SGL Command Set: Supported 00:14:36.278 SGL Keyed: Supported 00:14:36.278 SGL Bit Bucket Descriptor: Not Supported 00:14:36.278 SGL Metadata Pointer: Not Supported 00:14:36.278 Oversized SGL: Not Supported 00:14:36.278 SGL Metadata Address: Not Supported 00:14:36.278 SGL Offset: Supported 00:14:36.278 Transport SGL Data Block: Not Supported 00:14:36.278 Replay Protected Memory Block: Not Supported 00:14:36.278 00:14:36.278 Firmware Slot Information 00:14:36.278 ========================= 00:14:36.278 Active slot: 1 00:14:36.278 Slot 1 Firmware Revision: 25.01 00:14:36.278 00:14:36.278 00:14:36.278 Commands Supported and Effects 00:14:36.278 ============================== 00:14:36.278 Admin Commands 00:14:36.278 -------------- 00:14:36.278 Get Log Page (02h): Supported 00:14:36.278 Identify (06h): Supported 00:14:36.278 Abort (08h): Supported 00:14:36.278 Set Features (09h): Supported 00:14:36.278 Get Features (0Ah): Supported 00:14:36.278 Asynchronous Event Request (0Ch): Supported 00:14:36.278 Keep Alive (18h): Supported 00:14:36.278 I/O Commands 00:14:36.278 ------------ 00:14:36.278 Flush (00h): Supported LBA-Change 00:14:36.278 Write (01h): Supported LBA-Change 00:14:36.278 Read (02h): Supported 00:14:36.278 Compare (05h): Supported 00:14:36.278 Write Zeroes (08h): Supported LBA-Change 00:14:36.278 Dataset Management (09h): Supported LBA-Change 00:14:36.278 Copy (19h): 
Supported LBA-Change 00:14:36.278 00:14:36.278 Error Log 00:14:36.278 ========= 00:14:36.278 00:14:36.278 Arbitration 00:14:36.278 =========== 00:14:36.278 Arbitration Burst: 1 00:14:36.278 00:14:36.278 Power Management 00:14:36.278 ================ 00:14:36.278 Number of Power States: 1 00:14:36.278 Current Power State: Power State #0 00:14:36.278 Power State #0: 00:14:36.278 Max Power: 0.00 W 00:14:36.278 Non-Operational State: Operational 00:14:36.278 Entry Latency: Not Reported 00:14:36.278 Exit Latency: Not Reported 00:14:36.278 Relative Read Throughput: 0 00:14:36.278 Relative Read Latency: 0 00:14:36.278 Relative Write Throughput: 0 00:14:36.278 Relative Write Latency: 0 00:14:36.278 Idle Power: Not Reported 00:14:36.278 Active Power: Not Reported 00:14:36.278 Non-Operational Permissive Mode: Not Supported 00:14:36.278 00:14:36.278 Health Information 00:14:36.278 ================== 00:14:36.278 Critical Warnings: 00:14:36.278 Available Spare Space: OK 00:14:36.278 Temperature: OK 00:14:36.278 Device Reliability: OK 00:14:36.278 Read Only: No 00:14:36.278 Volatile Memory Backup: OK 00:14:36.278 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:36.278 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:36.278 Available Spare: 0% 00:14:36.278 Available Spare Threshold: 0% 00:14:36.278 Life Percentage Used:[2024-11-12 10:36:24.876682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.278 [2024-11-12 10:36:24.876690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdcc750) 00:14:36.278 [2024-11-12 10:36:24.876700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.278 [2024-11-12 10:36:24.876728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe311c0, cid 7, qid 0 00:14:36.278 [2024-11-12 10:36:24.877159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.278 [2024-11-12 10:36:24.877176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.278 [2024-11-12 10:36:24.877196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.278 [2024-11-12 10:36:24.877201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe311c0) on tqpair=0xdcc750 00:14:36.278 [2024-11-12 10:36:24.877260] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:36.278 [2024-11-12 10:36:24.877274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30740) on tqpair=0xdcc750 00:14:36.278 [2024-11-12 10:36:24.877281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.278 [2024-11-12 10:36:24.877287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe308c0) on tqpair=0xdcc750 00:14:36.278 [2024-11-12 10:36:24.877292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.278 [2024-11-12 10:36:24.877297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30a40) on tqpair=0xdcc750 00:14:36.278 [2024-11-12 10:36:24.877302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.278 [2024-11-12 10:36:24.877308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 
00:14:36.278 [2024-11-12 10:36:24.877312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.278 [2024-11-12 10:36:24.877322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.278 [2024-11-12 10:36:24.877327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.278 [2024-11-12 10:36:24.877331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.278 [2024-11-12 10:36:24.877339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.278 [2024-11-12 10:36:24.877365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.278 [2024-11-12 10:36:24.877729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.278 [2024-11-12 10:36:24.877746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.278 [2024-11-12 10:36:24.877750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.278 [2024-11-12 10:36:24.877755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.278 [2024-11-12 10:36:24.877763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.877768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.877772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.877780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.877804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.878094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.878109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.878114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.878124] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:36.279 [2024-11-12 10:36:24.878129] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:36.279 [2024-11-12 10:36:24.878140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.878157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.878189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.878357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.878365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.878368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.878384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.878401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.878422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.878525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.878532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.878536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.878551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.878568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.878587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.878927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.878943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.878947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.878963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.878972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.878979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.878999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.879048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.879055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.879058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.879073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.879090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.879140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.879493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.879523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.879527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.879544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.879561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.879582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.879628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.879635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.879639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.879654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.879663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.879670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.879689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.880034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.880049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.880054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.880058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.880069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.880074] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.880078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.880086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.880105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.884230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.884245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.884250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.884254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.884268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.884274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.884278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdcc750) 00:14:36.279 [2024-11-12 10:36:24.884287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.279 [2024-11-12 10:36:24.884312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe30bc0, cid 3, qid 0 00:14:36.279 [2024-11-12 10:36:24.884369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:36.279 [2024-11-12 10:36:24.884376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:36.279 [2024-11-12 10:36:24.884380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:36.279 [2024-11-12 10:36:24.884384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe30bc0) on tqpair=0xdcc750 00:14:36.279 [2024-11-12 10:36:24.884392] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:14:36.279 0% 00:14:36.279 Data Units Read: 0 00:14:36.279 Data Units Written: 0 00:14:36.279 Host Read Commands: 0 00:14:36.279 Host Write Commands: 0 00:14:36.279 Controller Busy Time: 0 minutes 00:14:36.279 Power Cycles: 0 00:14:36.279 Power On Hours: 0 hours 00:14:36.279 Unsafe Shutdowns: 0 00:14:36.279 Unrecoverable Media Errors: 0 00:14:36.279 Lifetime Error Log Entries: 0 00:14:36.279 Warning Temperature Time: 0 minutes 00:14:36.279 Critical Temperature Time: 0 minutes 00:14:36.279 00:14:36.279 Number of Queues 00:14:36.279 ================ 00:14:36.279 Number of I/O Submission Queues: 127 00:14:36.279 Number of I/O Completion Queues: 127 00:14:36.279 00:14:36.279 Active Namespaces 00:14:36.279 ================= 00:14:36.279 Namespace ID:1 00:14:36.279 Error Recovery Timeout: Unlimited 00:14:36.279 Command Set Identifier: NVM (00h) 00:14:36.279 Deallocate: Supported 00:14:36.279 Deallocated/Unwritten Error: Not Supported 00:14:36.279 Deallocated Read Value: Unknown 00:14:36.279 Deallocate in Write Zeroes: Not Supported 00:14:36.279 Deallocated Guard Field: 0xFFFF 00:14:36.279 Flush: Supported 00:14:36.279 Reservation: Supported 00:14:36.279 Namespace Sharing Capabilities: Multiple Controllers 00:14:36.280 Size (in LBAs): 131072 (0GiB) 00:14:36.280 Capacity (in LBAs): 131072 (0GiB) 
00:14:36.280 Utilization (in LBAs): 131072 (0GiB) 00:14:36.280 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:36.280 EUI64: ABCDEF0123456789 00:14:36.280 UUID: e3c26b45-6436-44d7-8dc0-ed740d0f3f34 00:14:36.280 Thin Provisioning: Not Supported 00:14:36.280 Per-NS Atomic Units: Yes 00:14:36.280 Atomic Boundary Size (Normal): 0 00:14:36.280 Atomic Boundary Size (PFail): 0 00:14:36.280 Atomic Boundary Offset: 0 00:14:36.280 Maximum Single Source Range Length: 65535 00:14:36.280 Maximum Copy Length: 65535 00:14:36.280 Maximum Source Range Count: 1 00:14:36.280 NGUID/EUI64 Never Reused: No 00:14:36.280 Namespace Write Protected: No 00:14:36.280 Number of LBA Formats: 1 00:14:36.280 Current LBA Format: LBA Format #00 00:14:36.280 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:36.280 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:36.280 10:36:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:36.280 rmmod nvme_tcp 00:14:36.280 rmmod nvme_fabrics 00:14:36.280 rmmod nvme_keyring 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73674 ']' 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73674 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 73674 ']' 00:14:36.280 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 73674 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73674 00:14:36.539 killing process with pid 73674 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:36.539 10:36:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73674' 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 73674 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 73674 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:36.539 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:36.799 00:14:36.799 real 0m2.157s 00:14:36.799 user 0m4.343s 00:14:36.799 sys 0m0.685s 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.799 10:36:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:36.799 ************************************ 00:14:36.799 END TEST nvmf_identify 00:14:36.799 ************************************ 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:36.799 ************************************ 00:14:36.799 START TEST nvmf_perf 00:14:36.799 ************************************ 00:14:36.799 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:37.059 * Looking for test storage... 00:14:37.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.059 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.059 --rc genhtml_branch_coverage=1 00:14:37.059 --rc genhtml_function_coverage=1 00:14:37.060 --rc genhtml_legend=1 00:14:37.060 --rc geninfo_all_blocks=1 00:14:37.060 --rc geninfo_unexecuted_blocks=1 00:14:37.060 00:14:37.060 ' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.060 --rc genhtml_branch_coverage=1 00:14:37.060 --rc genhtml_function_coverage=1 00:14:37.060 --rc genhtml_legend=1 00:14:37.060 --rc geninfo_all_blocks=1 00:14:37.060 --rc geninfo_unexecuted_blocks=1 00:14:37.060 00:14:37.060 ' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.060 --rc genhtml_branch_coverage=1 00:14:37.060 --rc genhtml_function_coverage=1 00:14:37.060 --rc genhtml_legend=1 00:14:37.060 --rc geninfo_all_blocks=1 00:14:37.060 --rc geninfo_unexecuted_blocks=1 00:14:37.060 00:14:37.060 ' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.060 --rc genhtml_branch_coverage=1 00:14:37.060 --rc genhtml_function_coverage=1 00:14:37.060 --rc genhtml_legend=1 00:14:37.060 --rc geninfo_all_blocks=1 00:14:37.060 --rc geninfo_unexecuted_blocks=1 00:14:37.060 00:14:37.060 ' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.060 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:37.060 Cannot find device "nvmf_init_br" 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:37.060 Cannot find device "nvmf_init_br2" 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:37.060 Cannot find device "nvmf_tgt_br" 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.060 Cannot find device "nvmf_tgt_br2" 00:14:37.060 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:37.061 Cannot find device "nvmf_init_br" 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:37.061 Cannot find device "nvmf_init_br2" 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:37.061 Cannot find device "nvmf_tgt_br" 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:37.061 Cannot find device "nvmf_tgt_br2" 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:37.061 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:37.319 Cannot find device "nvmf_br" 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:37.320 Cannot find device "nvmf_init_if" 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:37.320 Cannot find device "nvmf_init_if2" 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:37.320 10:36:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:37.320 10:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:37.320 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:37.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:37.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:37.579 00:14:37.579 --- 10.0.0.3 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:37.579 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:37.579 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:37.579 00:14:37.579 --- 10.0.0.4 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:37.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:37.579 00:14:37.579 --- 10.0.0.1 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:37.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:14:37.579 00:14:37.579 --- 10.0.0.2 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=73922 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 73922 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 73922 ']' 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:37.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
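Note: the nvmf_veth_init trace above builds the virtual test network that everything after this point relies on: two initiator veths left on the host (10.0.0.1 and 10.0.0.2), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, plus SPDK-tagged iptables rules opening TCP port 4420, verified with the four pings. A minimal standalone sketch of the same topology, covering only the first initiator/target pair and assuming iproute2 and iptables are available:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # host -> namespace across the bridge, as the trace does for all four addresses
The second pair (nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4) is created the same way, and the SPDK_NVMF comment tag on the firewall rules is what lets the teardown later remove only SPDK's rules.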
00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:37.579 10:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:37.579 [2024-11-12 10:36:26.188868] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:37.579 [2024-11-12 10:36:26.188975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.838 [2024-11-12 10:36:26.334984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.838 [2024-11-12 10:36:26.365637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.838 [2024-11-12 10:36:26.365883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.838 [2024-11-12 10:36:26.366030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.838 [2024-11-12 10:36:26.366146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.838 [2024-11-12 10:36:26.366212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.838 [2024-11-12 10:36:26.367034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.838 [2024-11-12 10:36:26.367222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.838 [2024-11-12 10:36:26.367406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.838 [2024-11-12 10:36:26.367415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.838 [2024-11-12 10:36:26.396353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.423 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:38.423 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:14:38.423 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.423 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.423 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:38.682 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.682 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:38.682 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:38.941 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:38.941 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:39.200 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:39.200 10:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:39.459 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:39.459 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:39.459 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:39.459 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:39.459 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:39.718 [2024-11-12 10:36:28.449487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.977 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.977 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:39.977 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.237 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:40.237 10:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:40.496 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:40.755 [2024-11-12 10:36:29.450805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:40.755 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:41.013 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:41.013 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:41.013 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:41.013 10:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:42.390 Initializing NVMe Controllers 00:14:42.390 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:42.390 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:42.390 Initialization complete. Launching workers. 00:14:42.390 ======================================================== 00:14:42.390 Latency(us) 00:14:42.390 Device Information : IOPS MiB/s Average min max 00:14:42.390 PCIE (0000:00:10.0) NSID 1 from core 0: 22619.48 88.36 1414.63 394.46 7707.31 00:14:42.390 ======================================================== 00:14:42.390 Total : 22619.48 88.36 1414.63 394.46 7707.31 00:14:42.390 00:14:42.390 10:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:43.767 Initializing NVMe Controllers 00:14:43.767 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.767 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.767 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:43.767 Initialization complete. Launching workers. 
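Note: the target provisioning traced above reduces to a short rpc.py sequence against the nvmf_tgt that was started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF). A sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock; the -o transport option is copied from the trace as-is:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                      # 64 MB RAM-backed bdev, 512-byte blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # namespace 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # namespace 2, the local NVMe at 0000:00:10.0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
After this the subsystem is reachable from the host side of the bridge at 10.0.0.3:4420, which is the address every fabric perf run below connects to.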
00:14:43.767 ======================================================== 00:14:43.767 Latency(us) 00:14:43.767 Device Information : IOPS MiB/s Average min max 00:14:43.767 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3944.99 15.41 253.15 92.05 7138.09 00:14:43.767 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7998.59 5722.58 14086.91 00:14:43.767 ======================================================== 00:14:43.767 Total : 4070.99 15.90 492.88 92.05 14086.91 00:14:43.767 00:14:43.767 10:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:44.703 Initializing NVMe Controllers 00:14:44.703 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.703 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.703 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:44.703 Initialization complete. Launching workers. 00:14:44.703 ======================================================== 00:14:44.703 Latency(us) 00:14:44.703 Device Information : IOPS MiB/s Average min max 00:14:44.703 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9100.52 35.55 3528.74 549.11 7969.52 00:14:44.703 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3981.79 15.55 8074.16 5147.55 15461.89 00:14:44.703 ======================================================== 00:14:44.703 Total : 13082.31 51.10 4912.20 549.11 15461.89 00:14:44.703 00:14:44.963 10:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:44.963 10:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:47.496 Initializing NVMe Controllers 00:14:47.496 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.496 Controller IO queue size 128, less than required. 00:14:47.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.496 Controller IO queue size 128, less than required. 00:14:47.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:47.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:47.496 Initialization complete. Launching workers. 
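Note: every measurement in this test is a spdk_nvme_perf run; only the queue depth, I/O size and extra flags change between runs. The general shape against the TCP listener is sketched below; only the commonly documented flags are annotated, and the remaining flags seen in the trace (-O, -HI, -P, --transport-stat) are taken as-is from perf.sh:
    # -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage,
    # -t run time in seconds, -r transport ID of the target to connect to
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
The very first run used -r 'trtype:PCIe traddr:0000:00:10.0' instead, giving a locally attached baseline before the same workload is repeated over NVMe/TCP.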
00:14:47.496 ======================================================== 00:14:47.496 Latency(us) 00:14:47.496 Device Information : IOPS MiB/s Average min max 00:14:47.496 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1925.55 481.39 67521.81 35747.92 110968.70 00:14:47.496 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 676.59 169.15 193712.15 58133.58 309600.92 00:14:47.496 ======================================================== 00:14:47.496 Total : 2602.14 650.53 100332.75 35747.92 309600.92 00:14:47.496 00:14:47.496 10:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:47.496 Initializing NVMe Controllers 00:14:47.496 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.496 Controller IO queue size 128, less than required. 00:14:47.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:47.496 Controller IO queue size 128, less than required. 00:14:47.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:47.496 WARNING: Some requested NVMe devices were skipped 00:14:47.496 No valid NVMe controllers or AIO or URING devices found 00:14:47.496 10:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:50.033 Initializing NVMe Controllers 00:14:50.033 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.033 Controller IO queue size 128, less than required. 00:14:50.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.033 Controller IO queue size 128, less than required. 00:14:50.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.033 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.033 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:50.033 Initialization complete. Launching workers. 
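Note: the -o 36964 run directly above is expected to end with "No valid NVMe controllers or AIO or URING devices found": 36964 bytes is not a multiple of either namespace's sector size (512 for NSID 1, 4096 for NSID 2), so both namespaces are removed and nothing is left to test. The arithmetic is easy to confirm:
    echo $((36964 % 512)) $((36964 % 4096))    # prints "100 100" -- neither remainder is zero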
00:14:50.033 00:14:50.033 ==================== 00:14:50.033 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:50.033 TCP transport: 00:14:50.033 polls: 13042 00:14:50.033 idle_polls: 9050 00:14:50.033 sock_completions: 3992 00:14:50.033 nvme_completions: 7189 00:14:50.033 submitted_requests: 10762 00:14:50.033 queued_requests: 1 00:14:50.033 00:14:50.033 ==================== 00:14:50.033 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:50.033 TCP transport: 00:14:50.033 polls: 15739 00:14:50.033 idle_polls: 11750 00:14:50.033 sock_completions: 3989 00:14:50.033 nvme_completions: 6687 00:14:50.033 submitted_requests: 10042 00:14:50.033 queued_requests: 1 00:14:50.033 ======================================================== 00:14:50.033 Latency(us) 00:14:50.033 Device Information : IOPS MiB/s Average min max 00:14:50.033 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1796.67 449.17 72345.21 36169.04 115852.49 00:14:50.033 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1671.19 417.80 77202.95 25376.34 135445.10 00:14:50.033 ======================================================== 00:14:50.033 Total : 3467.86 866.96 74686.20 25376.34 135445.10 00:14:50.033 00:14:50.033 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:50.033 10:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.601 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.602 rmmod nvme_tcp 00:14:50.602 rmmod nvme_fabrics 00:14:50.602 rmmod nvme_keyring 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 73922 ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 73922 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 73922 ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 73922 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73922 00:14:50.602 killing process with pid 73922 00:14:50.602 10:36:39 
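Note: in the --transport-stat output above, polls counts how many times the TCP poll group ran and idle_polls counts the runs that found nothing to do, so the difference is the number of busy polls. For this run that works out to 13042 - 9050 = 3992 for the NSID 1 queue and 15739 - 11750 = 3989 for NSID 2, which happens to match the sock_completions counters. A quick shell check:
    echo $((13042 - 9050)) $((15739 - 11750))    # 3992 3989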
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73922' 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 73922 00:14:50.602 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 73922 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.169 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.429 10:36:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:51.429 ************************************ 00:14:51.429 END TEST nvmf_perf 00:14:51.429 ************************************ 
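Note: the nvmftestfini teardown traced above undoes the whole environment: unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill and wait for the target, strip the firewall rules, then delete the bridge, veth pairs and namespace. Because every rule was inserted with an 'SPDK_NVMF:' comment, the iptables cleanup is a simple save/filter/restore round-trip. A condensed sketch of the same cleanup:
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # the trace removes them one by one
    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmfpid was 73922 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged SPDK rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2   # deleting one veth end removes its peer too
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # assumed to be what _remove_spdk_ns finishes with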
00:14:51.429 00:14:51.429 real 0m14.526s 00:14:51.429 user 0m52.694s 00:14:51.429 sys 0m3.975s 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:51.429 ************************************ 00:14:51.429 START TEST nvmf_fio_host 00:14:51.429 ************************************ 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:51.429 * Looking for test storage... 00:14:51.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:51.429 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.689 --rc genhtml_branch_coverage=1 00:14:51.689 --rc genhtml_function_coverage=1 00:14:51.689 --rc genhtml_legend=1 00:14:51.689 --rc geninfo_all_blocks=1 00:14:51.689 --rc geninfo_unexecuted_blocks=1 00:14:51.689 00:14:51.689 ' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.689 --rc genhtml_branch_coverage=1 00:14:51.689 --rc genhtml_function_coverage=1 00:14:51.689 --rc genhtml_legend=1 00:14:51.689 --rc geninfo_all_blocks=1 00:14:51.689 --rc geninfo_unexecuted_blocks=1 00:14:51.689 00:14:51.689 ' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.689 --rc genhtml_branch_coverage=1 00:14:51.689 --rc genhtml_function_coverage=1 00:14:51.689 --rc genhtml_legend=1 00:14:51.689 --rc geninfo_all_blocks=1 00:14:51.689 --rc geninfo_unexecuted_blocks=1 00:14:51.689 00:14:51.689 ' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:51.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.689 --rc genhtml_branch_coverage=1 00:14:51.689 --rc genhtml_function_coverage=1 00:14:51.689 --rc genhtml_legend=1 00:14:51.689 --rc geninfo_all_blocks=1 00:14:51.689 --rc geninfo_unexecuted_blocks=1 00:14:51.689 00:14:51.689 ' 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.689 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.690 10:36:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
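The NVME_HOST defaults captured above (the generated host NQN/ID, port 4420, NVME_CONNECT='nvme connect') are the values a kernel-initiator connect would consume. This particular test drives I/O through the SPDK fio plugin instead, but for reference a manual connect against the nqn.2016-06.io.spdk:cnode1 subsystem created later in this run would look roughly like the sketch below (the 10.0.0.3 listener address is only assigned further down; all values are taken from this log, nothing here is authoritative):

  # hypothetical manual connect using the defaults recorded in this trace
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 \
      --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096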
00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:51.690 Cannot find device "nvmf_init_br" 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:51.690 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:51.691 Cannot find device "nvmf_init_br2" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:51.691 Cannot find device "nvmf_tgt_br" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:51.691 Cannot find device "nvmf_tgt_br2" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:51.691 Cannot find device "nvmf_init_br" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:51.691 Cannot find device "nvmf_init_br2" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:51.691 Cannot find device "nvmf_tgt_br" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:51.691 Cannot find device "nvmf_tgt_br2" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:51.691 Cannot find device "nvmf_br" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:51.691 Cannot find device "nvmf_init_if" 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:51.691 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:51.950 Cannot find device "nvmf_init_if2" 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.950 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:52.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:52.209 00:14:52.209 --- 10.0.0.3 ping statistics --- 00:14:52.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.209 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.209 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.209 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:14:52.209 00:14:52.209 --- 10.0.0.4 ping statistics --- 00:14:52.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.209 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:52.209 00:14:52.209 --- 10.0.0.1 ping statistics --- 00:14:52.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.209 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:14:52.209 00:14:52.209 --- 10.0.0.2 ping statistics --- 00:14:52.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.209 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.209 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74384 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74384 00:14:52.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
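The pings above confirm the veth/bridge lab that nvmf_veth_init just assembled. Condensed, and keeping only the first initiator/target pair, the topology built by the commands in this trace is roughly the following (the full common.sh also wires up nvmf_init_if2/nvmf_tgt_if2 and carries extra error handling; this is a sketch reconstructed from the log, not the script itself):

  ip netns add nvmf_tgt_ns_spdk                                  # target side gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target    <-> bridge leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.3                                             # same reachability check as above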
00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 74384 ']' 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.210 10:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.210 [2024-11-12 10:36:40.818349] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:14:52.210 [2024-11-12 10:36:40.818447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.468 [2024-11-12 10:36:40.970550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.468 [2024-11-12 10:36:41.010112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.468 [2024-11-12 10:36:41.010205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.468 [2024-11-12 10:36:41.010232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.468 [2024-11-12 10:36:41.010242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.468 [2024-11-12 10:36:41.010250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
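With nvmf_tgt now initializing inside nvmf_tgt_ns_spdk, the harness provisions it over scripts/rpc.py and then runs fio through the SPDK NVMe plugin. Stripped of the xtrace prefixes, the sequence visible in the lines that follow is roughly this (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and paths are abbreviated relative to the spdk repo):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc1                    # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  LD_PRELOAD=build/fio/spdk_nvme fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096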
00:14:52.468 [2024-11-12 10:36:41.011209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.468 [2024-11-12 10:36:41.011988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.468 [2024-11-12 10:36:41.012167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.468 [2024-11-12 10:36:41.012172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.468 [2024-11-12 10:36:41.046554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.468 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.468 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:14:52.468 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.726 [2024-11-12 10:36:41.350794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.726 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:52.726 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.726 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.726 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:52.984 Malloc1 00:14:52.984 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.242 10:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.500 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:53.759 [2024-11-12 10:36:42.384950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.759 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:54.018 10:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:54.276 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:54.276 fio-3.35 00:14:54.276 Starting 1 thread 00:14:56.938 00:14:56.938 test: (groupid=0, jobs=1): err= 0: pid=74454: Tue Nov 12 10:36:45 2024 00:14:56.938 read: IOPS=8752, BW=34.2MiB/s (35.8MB/s)(68.6MiB/2007msec) 00:14:56.938 slat (nsec): min=1987, max=304071, avg=2638.66, stdev=3512.06 00:14:56.939 clat (usec): min=2561, max=13626, avg=7598.01, stdev=605.94 00:14:56.939 lat (usec): min=2607, max=13628, avg=7600.65, stdev=605.78 00:14:56.939 clat percentiles (usec): 00:14:56.939 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:14:56.939 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:14:56.939 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:14:56.939 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11863], 99.95th=[12780], 00:14:56.939 | 99.99th=[13435] 00:14:56.939 bw ( KiB/s): min=34008, max=35760, per=99.99%, avg=35006.00, stdev=846.93, samples=4 00:14:56.939 iops : min= 8502, max= 8940, avg=8751.50, stdev=211.73, samples=4 00:14:56.939 write: IOPS=8752, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2007msec); 0 zone resets 00:14:56.939 slat (usec): min=2, max=245, avg= 2.77, stdev= 2.56 00:14:56.939 clat (usec): min=2410, max=13273, avg=6941.94, stdev=594.50 00:14:56.939 lat (usec): min=2424, max=13275, avg=6944.71, stdev=594.39 00:14:56.939 clat 
percentiles (usec): 00:14:56.939 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:14:56.939 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:14:56.939 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7635], 00:14:56.939 | 99.00th=[ 8979], 99.50th=[ 9896], 99.90th=[11731], 99.95th=[12518], 00:14:56.939 | 99.99th=[12911] 00:14:56.939 bw ( KiB/s): min=34752, max=35504, per=99.98%, avg=35006.00, stdev=337.80, samples=4 00:14:56.939 iops : min= 8688, max= 8876, avg=8751.50, stdev=84.45, samples=4 00:14:56.939 lat (msec) : 4=0.20%, 10=99.34%, 20=0.46% 00:14:56.939 cpu : usr=70.34%, sys=22.03%, ctx=108, majf=0, minf=7 00:14:56.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:56.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.939 issued rwts: total=17566,17567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.939 00:14:56.939 Run status group 0 (all jobs): 00:14:56.939 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.6MiB (71.9MB), run=2007-2007msec 00:14:56.939 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2007-2007msec 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:56.939 10:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:56.939 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:56.939 fio-3.35 00:14:56.939 Starting 1 thread 00:14:59.471 00:14:59.471 test: (groupid=0, jobs=1): err= 0: pid=74501: Tue Nov 12 10:36:47 2024 00:14:59.471 read: IOPS=8395, BW=131MiB/s (138MB/s)(263MiB/2006msec) 00:14:59.471 slat (usec): min=2, max=140, avg= 3.71, stdev= 2.42 00:14:59.471 clat (usec): min=2916, max=17560, avg=8359.36, stdev=2479.35 00:14:59.471 lat (usec): min=2920, max=17563, avg=8363.08, stdev=2479.40 00:14:59.471 clat percentiles (usec): 00:14:59.471 | 1.00th=[ 4228], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6063], 00:14:59.471 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8717], 00:14:59.471 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11469], 95.00th=[12911], 00:14:59.471 | 99.00th=[15401], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:14:59.471 | 99.99th=[17433] 00:14:59.471 bw ( KiB/s): min=57056, max=83008, per=51.85%, avg=69656.00, stdev=13934.33, samples=4 00:14:59.471 iops : min= 3566, max= 5188, avg=4353.50, stdev=870.90, samples=4 00:14:59.471 write: IOPS=5167, BW=80.7MiB/s (84.7MB/s)(143MiB/1765msec); 0 zone resets 00:14:59.471 slat (usec): min=32, max=358, avg=37.95, stdev= 9.37 00:14:59.471 clat (usec): min=4641, max=19937, avg=11951.31, stdev=2027.94 00:14:59.471 lat (usec): min=4691, max=19970, avg=11989.25, stdev=2027.65 00:14:59.471 clat percentiles (usec): 00:14:59.471 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:14:59.471 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:14:59.471 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14484], 95.00th=[15533], 00:14:59.471 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:14:59.471 | 99.99th=[20055] 00:14:59.471 bw ( KiB/s): min=59200, max=85952, per=87.67%, avg=72480.00, stdev=13882.89, samples=4 00:14:59.471 iops : min= 3700, max= 5372, avg=4530.00, stdev=867.68, samples=4 00:14:59.471 lat (msec) : 4=0.31%, 10=54.29%, 20=45.40% 00:14:59.471 cpu : usr=82.84%, sys=13.52%, ctx=3, majf=0, minf=22 00:14:59.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:59.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.471 issued rwts: total=16842,9120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.471 00:14:59.471 Run status group 0 (all jobs): 00:14:59.471 READ: 
bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=263MiB (276MB), run=2006-2006msec 00:14:59.471 WRITE: bw=80.7MiB/s (84.7MB/s), 80.7MiB/s-80.7MiB/s (84.7MB/s-84.7MB/s), io=143MiB (149MB), run=1765-1765msec 00:14:59.471 10:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.471 10:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:59.471 10:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:59.471 10:36:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.471 rmmod nvme_tcp 00:14:59.471 rmmod nvme_fabrics 00:14:59.471 rmmod nvme_keyring 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74384 ']' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74384 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74384 ']' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74384 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74384 00:14:59.471 killing process with pid 74384 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74384' 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74384 00:14:59.471 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74384 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:59.730 10:36:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:59.730 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:59.990 00:14:59.990 real 0m8.487s 00:14:59.990 user 0m33.530s 00:14:59.990 sys 0m2.321s 00:14:59.990 ************************************ 00:14:59.990 END TEST nvmf_fio_host 00:14:59.990 ************************************ 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.990 ************************************ 00:14:59.990 START TEST nvmf_failover 
00:14:59.990 ************************************ 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:59.990 * Looking for test storage... 00:14:59.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:14:59.990 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:00.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.250 --rc genhtml_branch_coverage=1 00:15:00.250 --rc genhtml_function_coverage=1 00:15:00.250 --rc genhtml_legend=1 00:15:00.250 --rc geninfo_all_blocks=1 00:15:00.250 --rc geninfo_unexecuted_blocks=1 00:15:00.250 00:15:00.250 ' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:00.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.250 --rc genhtml_branch_coverage=1 00:15:00.250 --rc genhtml_function_coverage=1 00:15:00.250 --rc genhtml_legend=1 00:15:00.250 --rc geninfo_all_blocks=1 00:15:00.250 --rc geninfo_unexecuted_blocks=1 00:15:00.250 00:15:00.250 ' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:00.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.250 --rc genhtml_branch_coverage=1 00:15:00.250 --rc genhtml_function_coverage=1 00:15:00.250 --rc genhtml_legend=1 00:15:00.250 --rc geninfo_all_blocks=1 00:15:00.250 --rc geninfo_unexecuted_blocks=1 00:15:00.250 00:15:00.250 ' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:00.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.250 --rc genhtml_branch_coverage=1 00:15:00.250 --rc genhtml_function_coverage=1 00:15:00.250 --rc genhtml_legend=1 00:15:00.250 --rc geninfo_all_blocks=1 00:15:00.250 --rc geninfo_unexecuted_blocks=1 00:15:00.250 00:15:00.250 ' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.250 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.251 
10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.251 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:00.251 Cannot find device "nvmf_init_br" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:00.251 Cannot find device "nvmf_init_br2" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:00.251 Cannot find device "nvmf_tgt_br" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.251 Cannot find device "nvmf_tgt_br2" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:00.251 Cannot find device "nvmf_init_br" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:00.251 Cannot find device "nvmf_init_br2" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:00.251 Cannot find device "nvmf_tgt_br" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:00.251 Cannot find device "nvmf_tgt_br2" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:00.251 Cannot find device "nvmf_br" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:00.251 Cannot find device "nvmf_init_if" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:00.251 Cannot find device "nvmf_init_if2" 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.251 10:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.510 
10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.510 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:00.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:00.511 00:15:00.511 --- 10.0.0.3 ping statistics --- 00:15:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.511 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:00.511 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:00.511 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:15:00.511 00:15:00.511 --- 10.0.0.4 ping statistics --- 00:15:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.511 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:00.511 00:15:00.511 --- 10.0.0.1 ping statistics --- 00:15:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.511 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:00.511 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:00.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:00.770 00:15:00.770 --- 10.0.0.2 ping statistics --- 00:15:00.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.770 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74767 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74767 00:15:00.770 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74767 ']' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:00.770 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:00.770 [2024-11-12 10:36:49.358440] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:15:00.770 [2024-11-12 10:36:49.358756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.770 [2024-11-12 10:36:49.504947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:01.029 [2024-11-12 10:36:49.535869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.029 [2024-11-12 10:36:49.535937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.029 [2024-11-12 10:36:49.535964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.029 [2024-11-12 10:36:49.535971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.029 [2024-11-12 10:36:49.535978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
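
Condensed from the nvmf_veth_init trace above, the topology the test just built looks like this (a sketch assembled from the ip/iptables commands in the log; the cleanup attempts, link-up commands and the SPDK_NVMF iptables comments are elided):

# target side lives in a network namespace, initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# each interface (and lo inside the namespace) is then brought up with "ip link set ... up"
ip link add nvmf_br type bridge                               # ties the four *_br veth peers together
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the four pings above (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from the
# target namespace) confirm the bridge forwards in both directions before the target starts
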
00:15:01.029 [2024-11-12 10:36:49.536755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.029 [2024-11-12 10:36:49.536839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.029 [2024-11-12 10:36:49.536857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.029 [2024-11-12 10:36:49.565098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.029 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:01.365 [2024-11-12 10:36:49.948004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.365 10:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:01.624 Malloc0 00:15:01.624 10:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.882 10:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.141 10:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:02.400 [2024-11-12 10:36:51.090759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.400 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:02.658 [2024-11-12 10:36:51.338961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:02.658 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:02.917 [2024-11-12 10:36:51.583177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74817 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
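
The control-plane sequence host/failover.sh just ran, condensed into plain commands (a sketch: $rpc stands for scripts/rpc.py from the repo, and the target answers on the default /var/tmp/spdk.sock named in the waitforlisten message above):

# start the target inside the namespace (nvmfappstart -m 0xE), then configure it over RPC
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB ramdisk with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                            # three listeners on the same target IP
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done
# initiator side: bdevperf started idle (-z), to be driven later through its own RPC socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
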
00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74817 /var/tmp/bdevperf.sock 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74817 ']' 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:02.917 10:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:04.293 10:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:04.293 10:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:04.293 10:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:04.552 NVMe0n1 00:15:04.552 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:04.811 00:15:04.811 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74841 00:15:04.811 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.811 10:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:05.747 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:06.314 10:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:09.600 10:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:09.600 00:15:09.600 10:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:09.859 [2024-11-12 10:36:58.439940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4a30 is same with the state(6) to be set 00:15:09.859 10:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:13.144 10:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.144 [2024-11-12 10:37:01.768127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.144 
10:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:15:14.078 10:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:15:14.644 10:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74841
00:15:19.917 {
00:15:19.917 "results": [
00:15:19.917 {
00:15:19.917 "job": "NVMe0n1",
00:15:19.917 "core_mask": "0x1",
00:15:19.917 "workload": "verify",
00:15:19.917 "status": "finished",
00:15:19.917 "verify_range": {
00:15:19.917 "start": 0,
00:15:19.917 "length": 16384
00:15:19.917 },
00:15:19.917 "queue_depth": 128,
00:15:19.917 "io_size": 4096,
00:15:19.917 "runtime": 15.012076,
00:15:19.917 "iops": 9138.10987900674,
00:15:19.917 "mibps": 35.69574171487008,
00:15:19.917 "io_failed": 3237,
00:15:19.917 "io_timeout": 0,
00:15:19.917 "avg_latency_us": 13652.622552684854,
00:15:19.917 "min_latency_us": 592.0581818181818,
00:15:19.917 "max_latency_us": 32648.843636363636
00:15:19.917 }
00:15:19.917 ],
00:15:19.917 "core_count": 1
00:15:19.917 }
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74817
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74817 ']'
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74817
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74817
00:15:19.917 killing process with pid 74817
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74817'
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74817
00:15:19.917 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74817
00:15:20.183 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:20.183 [2024-11-12 10:36:51.652118] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization...
00:15:20.183 [2024-11-12 10:36:51.652237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74817 ]
00:15:20.183 [2024-11-12 10:36:51.802367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:20.183 [2024-11-12 10:36:51.842666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:20.183 [2024-11-12 10:36:51.876326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:15:20.183 Running I/O for 15 seconds...
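
Condensed, the failover exercise that produced the summary above is driven entirely through bdevperf's RPC socket and the target's listener list (a sketch; the socket, NQN and ports are the ones from the trace, repo paths shortened):

rpc=./scripts/rpc.py
# give bdevperf two paths to the same subsystem, with automatic failover between them
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run

# while I/O is running, pull listeners out from under it and add new ones
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420; sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421; sleep 3
$rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420; sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
wait   # for perform_tests to finish; the JSON above reports ~9138 IOPS with 3237 failed I/Os
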
00:15:20.183 6970.00 IOPS, 27.23 MiB/s [2024-11-12T10:37:08.941Z] [2024-11-12 10:36:54.745906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.745970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.745999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.183 [2024-11-12 10:36:54.746219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:20.183 [2024-11-12 10:36:54.746296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 
10:36:54.746636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.183 [2024-11-12 10:36:54.746838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.183 [2024-11-12 10:36:54.746852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.746868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.746883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.746899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.746913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.746929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.746944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.746960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.746975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.746991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.184 [2024-11-12 10:36:54.747538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69344 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.184 [2024-11-12 10:36:54.747964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.184 [2024-11-12 10:36:54.747988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 
[2024-11-12 10:36:54.748022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.748684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.748975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.748991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.185 [2024-11-12 10:36:54.749239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.185 [2024-11-12 10:36:54.749256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.185 [2024-11-12 10:36:54.749271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 
10:36:54.749420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.186 [2024-11-12 10:36:54.749710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749726] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadfc0 is same with the state(6) to be set 00:15:20.186 [2024-11-12 10:36:54.749759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.749775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.749787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69192 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.749802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.749828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.749839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69648 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.749853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.749877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.749888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69656 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.749904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.749929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.749941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69664 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.749954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.749969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.749979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.749990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69672 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69680 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69688 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69696 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69704 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69712 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69720 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.186 [2024-11-12 10:36:54.750353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.186 [2024-11-12 10:36:54.750363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.186 [2024-11-12 10:36:54.750375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69728 len:8 PRP1 0x0 PRP2 0x0 00:15:20.186 [2024-11-12 10:36:54.750389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:20.186 [2024-11-12 10:36:54.750403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.187 [2024-11-12 10:36:54.750413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.187 [2024-11-12 10:36:54.750424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69736 len:8 PRP1 0x0 PRP2 0x0 00:15:20.187 [2024-11-12 10:36:54.750437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.187 [2024-11-12 10:36:54.750462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.187 [2024-11-12 10:36:54.750473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69744 len:8 PRP1 0x0 PRP2 0x0 00:15:20.187 [2024-11-12 10:36:54.750486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.187 [2024-11-12 10:36:54.750511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.187 [2024-11-12 10:36:54.750522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69752 len:8 PRP1 0x0 PRP2 0x0 00:15:20.187 [2024-11-12 10:36:54.750535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.187 [2024-11-12 10:36:54.750560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.187 [2024-11-12 10:36:54.750578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:15:20.187 [2024-11-12 10:36:54.750593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.187 [2024-11-12 10:36:54.750621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.187 [2024-11-12 10:36:54.750632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69768 len:8 PRP1 0x0 PRP2 0x0 00:15:20.187 [2024-11-12 10:36:54.750645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750697] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:20.187 [2024-11-12 10:36:54.750759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.187 [2024-11-12 10:36:54.750782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.187 [2024-11-12 10:36:54.750812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.187 [2024-11-12 10:36:54.750840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.187 [2024-11-12 10:36:54.750872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:54.750887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:20.187 [2024-11-12 10:36:54.750945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11710 (9): Bad file descriptor 00:15:20.187 [2024-11-12 10:36:54.754960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:20.187 [2024-11-12 10:36:54.778688] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:15:20.187 7820.50 IOPS, 30.55 MiB/s [2024-11-12T10:37:08.945Z] 8197.00 IOPS, 32.02 MiB/s [2024-11-12T10:37:08.945Z] 8479.50 IOPS, 33.12 MiB/s [2024-11-12T10:37:08.945Z] [2024-11-12 10:36:58.440444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.187 [2024-11-12 10:36:58.440814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.440976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83632 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.440988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.441010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.441023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.441037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.187 [2024-11-12 10:36:58.441050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.187 [2024-11-12 10:36:58.441064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.188 [2024-11-12 10:36:58.441237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:20.188 [2024-11-12 10:36:58.441297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441861] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.188 [2024-11-12 10:36:58.441902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.188 [2024-11-12 10:36:58.441914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.441928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.441945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.441960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.441986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.441998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 
[2024-11-12 10:36:58.442461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.189 [2024-11-12 10:36:58.442689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.189 [2024-11-12 10:36:58.442986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.189 [2024-11-12 10:36:58.442999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.190 [2024-11-12 10:36:58.443025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.190 [2024-11-12 10:36:58.443052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.190 [2024-11-12 10:36:58.443078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.190 [2024-11-12 10:36:58.443131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.190 [2024-11-12 10:36:58.443162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:20.190 [2024-11-12 10:36:58.443759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.190 [2024-11-12 10:36:58.443812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae9e0 is same with the state(6) to be set 00:15:20.190 [2024-11-12 10:36:58.443841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.443852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:15:20.190 [2024-11-12 10:36:58.443873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.443897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.443907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84008 len:8 PRP1 0x0 PRP2 0x0 00:15:20.190 [2024-11-12 10:36:58.443919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.443940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.443950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:15:20.190 [2024-11-12 10:36:58.443962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.443974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.443983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.443993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:15:20.190 [2024-11-12 10:36:58.444004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.444019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.444029] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.444038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0 00:15:20.190 [2024-11-12 10:36:58.444057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.190 [2024-11-12 10:36:58.444070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.190 [2024-11-12 10:36:58.444080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.190 [2024-11-12 10:36:58.444089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84488 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84496 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84504 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84512 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84520 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:15:20.191 [2024-11-12 10:36:58.444337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84528 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84536 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84544 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84552 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84560 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84568 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 
10:36:58.444612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84576 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84584 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.191 [2024-11-12 10:36:58.444706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.191 [2024-11-12 10:36:58.444715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84592 len:8 PRP1 0x0 PRP2 0x0 00:15:20.191 [2024-11-12 10:36:58.444727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444775] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:20.191 [2024-11-12 10:36:58.444830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.191 [2024-11-12 10:36:58.444850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.191 [2024-11-12 10:36:58.444893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.191 [2024-11-12 10:36:58.444920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.191 [2024-11-12 10:36:58.444945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:36:58.444958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:15:20.191 [2024-11-12 10:36:58.445006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11710 (9): Bad file descriptor 00:15:20.191 [2024-11-12 10:36:58.448593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:20.191 [2024-11-12 10:36:58.473639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:15:20.191 8620.60 IOPS, 33.67 MiB/s [2024-11-12T10:37:08.949Z] 8790.50 IOPS, 34.34 MiB/s [2024-11-12T10:37:08.949Z] 8891.29 IOPS, 34.73 MiB/s [2024-11-12T10:37:08.949Z] 8971.88 IOPS, 35.05 MiB/s [2024-11-12T10:37:08.949Z] 9032.78 IOPS, 35.28 MiB/s [2024-11-12T10:37:08.949Z] [2024-11-12 10:37:03.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:37:03.091583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:37:03.091617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:37:03.091649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:37:03.091686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.191 [2024-11-12 10:37:03.091717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.191 [2024-11-12 10:37:03.091732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.091748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.091778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 
[2024-11-12 10:37:03.091870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.091915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.091945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.091974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.091987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.192 [2024-11-12 10:37:03.092668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51608 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.092978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.092994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.093008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.192 [2024-11-12 10:37:03.093040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.192 [2024-11-12 10:37:03.093054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 
[2024-11-12 10:37:03.093217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.093447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.093942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.093981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.094010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.094040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.094070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.094100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.193 [2024-11-12 10:37:03.094130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.094159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.193 [2024-11-12 10:37:03.094175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.193 [2024-11-12 10:37:03.094205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 
10:37:03.094577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.194 [2024-11-12 10:37:03.094810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.094973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.094987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.194 [2024-11-12 10:37:03.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.194 [2024-11-12 10:37:03.095340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae6a0 is same with the state(6) to be set 00:15:20.195 [2024-11-12 10:37:03.095357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51424 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51944 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51952 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51960 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51968 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51976 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51992 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52000 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52008 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52016 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.095956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.095970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.095980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.095990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52024 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.096025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.096035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52032 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.096071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.096097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52040 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.096158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.096169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52048 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.096207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.096218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52056 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:20.195 [2024-11-12 10:37:03.096281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:20.195 [2024-11-12 10:37:03.096300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52064 len:8 PRP1 0x0 PRP2 0x0 00:15:20.195 [2024-11-12 10:37:03.096315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096369] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:20.195 [2024-11-12 10:37:03.096431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.195 [2024-11-12 10:37:03.096454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.195 [2024-11-12 10:37:03.096485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.195 [2024-11-12 10:37:03.096518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.195 [2024-11-12 10:37:03.096546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.195 [2024-11-12 10:37:03.096561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:20.195 [2024-11-12 10:37:03.096616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe11710 (9): Bad file descriptor 00:15:20.195 [2024-11-12 10:37:03.100416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:20.195 [2024-11-12 10:37:03.128298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
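The burst of "ABORTED - SQ DELETION" completions above is the expected side effect of the path switches logged as "Start failover from 10.0.0.3:4421 to 10.0.0.3:4422" and then from 10.0.0.3:4422 to 10.0.0.3:4420: bdev_nvme tears down the submission queue on the old path, manually completes the queued I/O, disconnects, and reconnects the controller on the next registered trid ("Resetting controller successful"). As a rough sketch only (the trigger used earlier in this run is not visible in this excerpt; removing the listener the host is currently connected to is one assumed way to force such a switch), registering an alternate path and forcing a failover would look like:

    # Register an extra failover path on the already-attached controller
    # (the same form of call appears later in this log).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Assumed trigger: drop the listener currently in use so I/O fails over to the next path.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421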
00:15:20.195 9030.20 IOPS, 35.27 MiB/s [2024-11-12T10:37:08.953Z] 9052.18 IOPS, 35.36 MiB/s [2024-11-12T10:37:08.953Z] 9103.83 IOPS, 35.56 MiB/s [2024-11-12T10:37:08.953Z] 9113.85 IOPS, 35.60 MiB/s [2024-11-12T10:37:08.953Z] 9087.86 IOPS, 35.50 MiB/s [2024-11-12T10:37:08.953Z] 9138.53 IOPS, 35.70 MiB/s 00:15:20.195 Latency(us) 00:15:20.195 [2024-11-12T10:37:08.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.196 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:20.196 Verification LBA range: start 0x0 length 0x4000 00:15:20.196 NVMe0n1 : 15.01 9138.11 35.70 215.63 0.00 13652.62 592.06 32648.84 00:15:20.196 [2024-11-12T10:37:08.954Z] =================================================================================================================== 00:15:20.196 [2024-11-12T10:37:08.954Z] Total : 9138.11 35.70 215.63 0.00 13652.62 592.06 32648.84 00:15:20.196 Received shutdown signal, test time was about 15.000000 seconds 00:15:20.196 00:15:20.196 Latency(us) 00:15:20.196 [2024-11-12T10:37:08.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.196 [2024-11-12T10:37:08.954Z] =================================================================================================================== 00:15:20.196 [2024-11-12T10:37:08.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75019 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75019 /var/tmp/bdevperf.sock 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75019 ']' 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:20.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
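The pass/fail gate logged just above counts "Resetting controller successful" notices, presumably against the captured try.txt log, and requires exactly three before the next bdevperf stage is launched. A condensed rendering of that check, with the file name and message taken from this run and the surrounding xtrace plumbing dropped:

# One successful reset is expected per failed-over path; fewer means a path
# never recovered and the test should fail.
count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi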
00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:20.196 10:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:20.454 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:20.454 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:20.713 [2024-11-12 10:37:09.319656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:20.713 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:20.973 [2024-11-12 10:37:09.559839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:20.973 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:21.232 NVMe0n1 00:15:21.232 10:37:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:21.799 00:15:21.799 10:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:22.058 00:15:22.058 10:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:22.058 10:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:22.317 10:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.575 10:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:25.861 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:25.861 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:25.861 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75091 00:15:25.861 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75091 00:15:25.861 10:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.238 { 00:15:27.238 "results": [ 00:15:27.238 { 00:15:27.238 "job": "NVMe0n1", 00:15:27.238 "core_mask": "0x1", 00:15:27.238 "workload": "verify", 00:15:27.238 "status": "finished", 00:15:27.238 "verify_range": { 00:15:27.238 "start": 0, 00:15:27.238 "length": 16384 00:15:27.238 }, 00:15:27.238 "queue_depth": 128, 
00:15:27.238 "io_size": 4096, 00:15:27.238 "runtime": 1.005345, 00:15:27.238 "iops": 7278.098563179804, 00:15:27.238 "mibps": 28.43007251242111, 00:15:27.238 "io_failed": 0, 00:15:27.238 "io_timeout": 0, 00:15:27.238 "avg_latency_us": 17516.651098438255, 00:15:27.238 "min_latency_us": 2263.970909090909, 00:15:27.238 "max_latency_us": 15609.483636363637 00:15:27.238 } 00:15:27.238 ], 00:15:27.238 "core_count": 1 00:15:27.238 } 00:15:27.238 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:27.238 [2024-11-12 10:37:08.825257] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:15:27.238 [2024-11-12 10:37:08.825379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75019 ] 00:15:27.238 [2024-11-12 10:37:08.966505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.238 [2024-11-12 10:37:09.001651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.238 [2024-11-12 10:37:09.032629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.238 [2024-11-12 10:37:11.107850] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:27.238 [2024-11-12 10:37:11.107983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.238 [2024-11-12 10:37:11.108011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.238 [2024-11-12 10:37:11.108029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.238 [2024-11-12 10:37:11.108042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.238 [2024-11-12 10:37:11.108055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.238 [2024-11-12 10:37:11.108067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.238 [2024-11-12 10:37:11.108080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.238 [2024-11-12 10:37:11.108093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.238 [2024-11-12 10:37:11.108106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:27.238 [2024-11-12 10:37:11.108172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:27.238 [2024-11-12 10:37:11.108201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839710 (9): Bad file descriptor 00:15:27.238 [2024-11-12 10:37:11.112066] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:15:27.238 Running I/O for 1 seconds... 00:15:27.238 7189.00 IOPS, 28.08 MiB/s 00:15:27.238 Latency(us) 00:15:27.238 [2024-11-12T10:37:15.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.238 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.238 Verification LBA range: start 0x0 length 0x4000 00:15:27.238 NVMe0n1 : 1.01 7278.10 28.43 0.00 0.00 17516.65 2263.97 15609.48 00:15:27.238 [2024-11-12T10:37:15.996Z] =================================================================================================================== 00:15:27.238 [2024-11-12T10:37:15.996Z] Total : 7278.10 28.43 0.00 0.00 17516.65 2263.97 15609.48 00:15:27.238 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:27.238 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:27.238 10:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:27.497 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:27.497 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:27.756 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.016 10:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75019 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75019 ']' 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75019 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75019 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.317 killing process with pid 75019 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75019' 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75019 00:15:31.317 10:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75019 00:15:31.592 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:31.592 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.865 rmmod nvme_tcp 00:15:31.865 rmmod nvme_fabrics 00:15:31.865 rmmod nvme_keyring 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74767 ']' 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74767 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74767 ']' 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74767 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74767 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:31.865 killing process with pid 74767 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74767' 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74767 00:15:31.865 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74767 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.124 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:32.383 00:15:32.383 real 0m32.319s 00:15:32.383 user 2m5.196s 00:15:32.383 sys 0m5.505s 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:32.383 ************************************ 00:15:32.383 END TEST nvmf_failover 00:15:32.383 ************************************ 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.383 ************************************ 00:15:32.383 START TEST nvmf_host_discovery 00:15:32.383 ************************************ 00:15:32.383 10:37:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:32.383 * Looking for test storage... 
00:15:32.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:32.383 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:32.383 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:32.383 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.643 --rc genhtml_branch_coverage=1 00:15:32.643 --rc genhtml_function_coverage=1 00:15:32.643 --rc genhtml_legend=1 00:15:32.643 --rc geninfo_all_blocks=1 00:15:32.643 --rc geninfo_unexecuted_blocks=1 00:15:32.643 00:15:32.643 ' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.643 --rc genhtml_branch_coverage=1 00:15:32.643 --rc genhtml_function_coverage=1 00:15:32.643 --rc genhtml_legend=1 00:15:32.643 --rc geninfo_all_blocks=1 00:15:32.643 --rc geninfo_unexecuted_blocks=1 00:15:32.643 00:15:32.643 ' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.643 --rc genhtml_branch_coverage=1 00:15:32.643 --rc genhtml_function_coverage=1 00:15:32.643 --rc genhtml_legend=1 00:15:32.643 --rc geninfo_all_blocks=1 00:15:32.643 --rc geninfo_unexecuted_blocks=1 00:15:32.643 00:15:32.643 ' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:32.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.643 --rc genhtml_branch_coverage=1 00:15:32.643 --rc genhtml_function_coverage=1 00:15:32.643 --rc genhtml_legend=1 00:15:32.643 --rc geninfo_all_blocks=1 00:15:32.643 --rc geninfo_unexecuted_blocks=1 00:15:32.643 00:15:32.643 ' 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.643 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
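The variables above parameterize nvmf_veth_init, and the ip/iptables commands that follow build the virtual test network inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, using the same interface names and addresses seen in this run (the second initiator/target pair, the "Cannot find device" pre-cleanup and the reachability pings are left out):

# veth pair for the initiator side, veth pair for the target side; the target
# end is moved into its own network namespace and everything is bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Let NVMe/TCP traffic from the initiator interface reach port 4420 and allow
# forwarding across the bridge; the comment tags the rules for later cleanup.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'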
00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.644 Cannot find device "nvmf_init_br" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.644 Cannot find device "nvmf_init_br2" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.644 Cannot find device "nvmf_tgt_br" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.644 Cannot find device "nvmf_tgt_br2" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.644 Cannot find device "nvmf_init_br" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.644 Cannot find device "nvmf_init_br2" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.644 Cannot find device "nvmf_tgt_br" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.644 Cannot find device "nvmf_tgt_br2" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.644 Cannot find device "nvmf_br" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.644 Cannot find device "nvmf_init_if" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.644 Cannot find device "nvmf_init_if2" 00:15:32.644 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.645 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:32.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:32.904 00:15:32.904 --- 10.0.0.3 ping statistics --- 00:15:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.904 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:32.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:32.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:32.904 00:15:32.904 --- 10.0.0.4 ping statistics --- 00:15:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.904 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:32.904 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:32.904 00:15:32.904 --- 10.0.0.1 ping statistics --- 00:15:32.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.905 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:32.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:32.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:32.905 00:15:32.905 --- 10.0.0.2 ping statistics --- 00:15:32.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.905 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75414 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75414 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75414 ']' 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.905 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.164 [2024-11-12 10:37:21.705934] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:15:33.164 [2024-11-12 10:37:21.706048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.164 [2024-11-12 10:37:21.853563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.164 [2024-11-12 10:37:21.882386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.164 [2024-11-12 10:37:21.882449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.164 [2024-11-12 10:37:21.882475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.164 [2024-11-12 10:37:21.882483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.164 [2024-11-12 10:37:21.882489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.164 [2024-11-12 10:37:21.882810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.164 [2024-11-12 10:37:21.910810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.423 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.423 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:33.423 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.423 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.423 10:37:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 [2024-11-12 10:37:22.021795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 [2024-11-12 10:37:22.029871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.423 10:37:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 null0 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 null1 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75433 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75433 /tmp/host.sock 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75433 ']' 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:33.423 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:33.423 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.423 [2024-11-12 10:37:22.120317] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:15:33.423 [2024-11-12 10:37:22.120417] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75433 ] 00:15:33.682 [2024-11-12 10:37:22.272506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.682 [2024-11-12 10:37:22.312143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.682 [2024-11-12 10:37:22.345539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.682 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.941 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:33.941 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:33.941 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.941 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.941 10:37:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.942 10:37:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.942 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.201 [2024-11-12 10:37:22.770115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:34.201 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.202 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:34.461 10:37:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.461 10:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:15:34.461 10:37:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:15:34.719 [2024-11-12 10:37:23.414611] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:34.719 [2024-11-12 10:37:23.414662] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:34.719 [2024-11-12 10:37:23.414684] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:34.719 [2024-11-12 10:37:23.420659] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:34.719 [2024-11-12 10:37:23.475028] 
bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:34.719 [2024-11-12 10:37:23.476057] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8adf50:1 started. 00:15:34.978 [2024-11-12 10:37:23.477868] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:34.978 [2024-11-12 10:37:23.477895] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:34.978 [2024-11-12 10:37:23.483206] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8adf50 was disconnected and freed. delete nvme_qpair. 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.546 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.547 10:37:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
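At this point the target exposes nqn.2016-06.io.spdk:cnode0 with namespace null0 and a listener on 10.0.0.3:4420, and the host's discovery service has attached controller nvme0 with bdev nvme0n1. The get_subsystem_names / get_bdev_list / get_subsystem_paths helpers polled above are essentially rpc.py + jq one-liners; run by hand they would look roughly like this (a sketch, arguments copied from the trace):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs            # nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs                       # nvme0n1
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' # 4420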
00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:35.547 [2024-11-12 10:37:24.236848] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8bc030:1 started. 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.547 [2024-11-12 10:37:24.243785] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8bc030 was disconnected and freed. delete nvme_qpair. 
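The step above hot-adds null1 as a second namespace of cnode0; the discovery host picks the change up via an AER, creates nvme0n2, and raises one more bdev notification. By hand (a sketch, mirroring the trace):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # nvme0n1 nvme0n2
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'    # 1 new notification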
00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.547 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.806 [2024-11-12 10:37:24.347908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:35.806 [2024-11-12 10:37:24.349074] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:35.806 [2024-11-12 10:37:24.349114] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.806 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:35.807 [2024-11-12 10:37:24.355079] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.807 [2024-11-12 10:37:24.418938] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:35.807 [2024-11-12 10:37:24.418996] 
bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:35.807 [2024-11-12 10:37:24.419008] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:35.807 [2024-11-12 10:37:24.419013] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # (( max-- )) 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.807 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.067 [2024-11-12 10:37:24.577450] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:36.067 [2024-11-12 10:37:24.577507] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:36.067 [2024-11-12 10:37:24.583447] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:36.067 [2024-11-12 10:37:24.583484] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:36.067 [2024-11-12 10:37:24.583628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.067 [2024-11-12 10:37:24.583658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:36.067 [2024-11-12 10:37:24.583669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:36.067 [2024-11-12 10:37:24.583678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.067 [2024-11-12 10:37:24.583687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.067 [2024-11-12 10:37:24.583695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.067 [2024-11-12 10:37:24.583703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.067 [2024-11-12 10:37:24.583711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.067 [2024-11-12 10:37:24.583735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88a330 is same with the state(6) to be set 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.067 10:37:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:36.067 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:36.068 10:37:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.068 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
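Stopping discovery (bdev_nvme_stop_discovery -b nvme, just above) detaches the nvme0 controller and removes its bdevs, so the same polls now come back empty. By hand (a sketch, mirroring the trace):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | xargs   # (empty)
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs              # (empty)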
00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.327 10:37:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.263 [2024-11-12 10:37:25.995496] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:37.263 [2024-11-12 10:37:25.995545] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:37.264 [2024-11-12 10:37:25.995564] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:37.264 [2024-11-12 10:37:26.001527] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:37.523 [2024-11-12 10:37:26.059867] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:37.523 [2024-11-12 10:37:26.060621] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x8797e0:1 started. 00:15:37.523 [2024-11-12 10:37:26.062644] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:37.523 [2024-11-12 10:37:26.062706] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.523 [2024-11-12 10:37:26.064592] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x8797e0 was disconnected and freed. delete nvme_qpair. 
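Discovery has just been restarted with -w (wait_for_attach), so the RPC returns only once the controller is attached again on 4421. The next cases exercise the error paths: starting a discovery service whose name, or whose 10.0.0.3:8009 target, is already in use is rejected with JSON-RPC error -17 ("File exists"). By hand the duplicate start would look roughly like this (a sketch, arguments copied from the trace):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # repeating the same call (or reusing 10.0.0.3:8009 under another name) fails:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # => JSON-RPC error -17 "File exists"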
00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.523 request: 00:15:37.523 { 00:15:37.523 "name": "nvme", 00:15:37.523 "trtype": "tcp", 00:15:37.523 "traddr": "10.0.0.3", 00:15:37.523 "adrfam": "ipv4", 00:15:37.523 "trsvcid": "8009", 00:15:37.523 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.523 "wait_for_attach": true, 00:15:37.523 "method": "bdev_nvme_start_discovery", 00:15:37.523 "req_id": 1 00:15:37.523 } 00:15:37.523 Got JSON-RPC error response 00:15:37.523 response: 00:15:37.523 { 00:15:37.523 "code": -17, 00:15:37.523 "message": "File exists" 00:15:37.523 } 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.523 10:37:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.523 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.524 request: 00:15:37.524 { 00:15:37.524 "name": "nvme_second", 00:15:37.524 "trtype": "tcp", 00:15:37.524 "traddr": "10.0.0.3", 00:15:37.524 "adrfam": "ipv4", 00:15:37.524 "trsvcid": "8009", 00:15:37.524 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:37.524 "wait_for_attach": true, 00:15:37.524 "method": "bdev_nvme_start_discovery", 00:15:37.524 "req_id": 1 00:15:37.524 } 00:15:37.524 Got JSON-RPC error response 00:15:37.524 response: 00:15:37.524 { 00:15:37.524 "code": -17, 00:15:37.524 "message": "File exists" 00:15:37.524 } 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.524 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.783 10:37:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.720 [2024-11-12 10:37:27.315025] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.720 [2024-11-12 10:37:27.315127] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a8cc0 with addr=10.0.0.3, port=8010 00:15:38.720 [2024-11-12 10:37:27.315151] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:38.720 [2024-11-12 10:37:27.315162] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:38.720 [2024-11-12 10:37:27.315172] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:39.656 [2024-11-12 10:37:28.314995] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:39.656 [2024-11-12 10:37:28.315071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a8cc0 with addr=10.0.0.3, port=8010 00:15:39.656 [2024-11-12 10:37:28.315090] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:39.656 [2024-11-12 10:37:28.315108] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:39.656 [2024-11-12 10:37:28.315135] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:40.592 [2024-11-12 10:37:29.314853] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:40.592 request: 00:15:40.592 { 00:15:40.592 "name": "nvme_second", 00:15:40.592 "trtype": "tcp", 00:15:40.592 "traddr": "10.0.0.3", 00:15:40.592 "adrfam": "ipv4", 00:15:40.592 "trsvcid": "8010", 00:15:40.592 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:40.592 "wait_for_attach": false, 00:15:40.592 "attach_timeout_ms": 3000, 00:15:40.592 "method": "bdev_nvme_start_discovery", 00:15:40.592 "req_id": 1 00:15:40.593 } 00:15:40.593 Got JSON-RPC error response 00:15:40.593 response: 00:15:40.593 { 00:15:40.593 "code": -110, 00:15:40.593 "message": "Connection timed out" 00:15:40.593 } 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:40.593 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.851 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:40.851 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:15:40.851 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75433 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.852 rmmod nvme_tcp 00:15:40.852 rmmod nvme_fabrics 00:15:40.852 rmmod nvme_keyring 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75414 ']' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75414 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75414 ']' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75414 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75414 00:15:40.852 killing process with pid 75414 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75414' 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75414 00:15:40.852 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75414 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
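The teardown above is deliberately conservative about the firewall: every iptables rule the suite installs is tagged with an SPDK_NVMF comment (visible in the ipts calls later in this log), so nvmftestfini can strip only those rules by filtering iptables-save output back through iptables-restore. A condensed sketch of that cleanup order, assuming it runs as root in the shell that launched the target and that $nvmfpid was recorded at startup:

# Stop the nvmf target, unload the NVMe/TCP host modules, then drop only the
# firewall rules carrying the SPDK_NVMF comment tag; unrelated rules survive.
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore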
00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:41.108 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:41.366 00:15:41.366 real 0m8.959s 00:15:41.366 user 0m16.980s 00:15:41.366 sys 0m1.858s 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.366 ************************************ 00:15:41.366 END TEST nvmf_host_discovery 00:15:41.366 ************************************ 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:41.366 10:37:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.366 ************************************ 00:15:41.366 START TEST nvmf_host_multipath_status 00:15:41.366 ************************************ 00:15:41.366 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
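The multipath test launched here talks to a target that lives in its own network namespace: nvmf/common.sh (traced a little further down) builds veth pairs joined by the nvmf_br bridge, with the initiator side on 10.0.0.1/10.0.0.2 and the target side on 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk. A cut-down sketch of one initiator/target pair of that topology, using the interface names and addresses from the trace below (run as root; the iptables ACCEPT rules and the second pair of interfaces are omitted):

# Initiator-side veth (stays in the root namespace) and target-side veth (moved
# into nvmf_tgt_ns_spdk), bridged together on nvmf_br -- a reduced nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# The initiator (10.0.0.1) should now reach the namespaced target (10.0.0.3).
ping -c 1 10.0.0.3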
00:15:41.366 * Looking for test storage... 00:15:41.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.366 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:41.366 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:15:41.366 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:41.626 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.627 --rc genhtml_branch_coverage=1 00:15:41.627 --rc genhtml_function_coverage=1 00:15:41.627 --rc genhtml_legend=1 00:15:41.627 --rc geninfo_all_blocks=1 00:15:41.627 --rc geninfo_unexecuted_blocks=1 00:15:41.627 00:15:41.627 ' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.627 --rc genhtml_branch_coverage=1 00:15:41.627 --rc genhtml_function_coverage=1 00:15:41.627 --rc genhtml_legend=1 00:15:41.627 --rc geninfo_all_blocks=1 00:15:41.627 --rc geninfo_unexecuted_blocks=1 00:15:41.627 00:15:41.627 ' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.627 --rc genhtml_branch_coverage=1 00:15:41.627 --rc genhtml_function_coverage=1 00:15:41.627 --rc genhtml_legend=1 00:15:41.627 --rc geninfo_all_blocks=1 00:15:41.627 --rc geninfo_unexecuted_blocks=1 00:15:41.627 00:15:41.627 ' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.627 --rc genhtml_branch_coverage=1 00:15:41.627 --rc genhtml_function_coverage=1 00:15:41.627 --rc genhtml_legend=1 00:15:41.627 --rc geninfo_all_blocks=1 00:15:41.627 --rc geninfo_unexecuted_blocks=1 00:15:41.627 00:15:41.627 ' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.627 10:37:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:41.627 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.628 Cannot find device "nvmf_init_br" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.628 Cannot find device "nvmf_init_br2" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.628 Cannot find device "nvmf_tgt_br" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.628 Cannot find device "nvmf_tgt_br2" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.628 Cannot find device "nvmf_init_br" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.628 Cannot find device "nvmf_init_br2" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.628 Cannot find device "nvmf_tgt_br" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.628 Cannot find device "nvmf_tgt_br2" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.628 Cannot find device "nvmf_br" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:41.628 Cannot find device "nvmf_init_if" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.628 Cannot find device "nvmf_init_if2" 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.628 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:41.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:41.887 00:15:41.887 --- 10.0.0.3 ping statistics --- 00:15:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.887 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:41.887 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.146 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.146 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:42.146 00:15:42.146 --- 10.0.0.4 ping statistics --- 00:15:42.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.146 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:42.146 00:15:42.146 --- 10.0.0.1 ping statistics --- 00:15:42.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.146 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:42.146 00:15:42.146 --- 10.0.0.2 ping statistics --- 00:15:42.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.146 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=75928 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 75928 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 75928 ']' 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
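Once the namespaced nvmf_tgt above prints its startup banner (just below), the script provisions it over JSON-RPC: a TCP transport, a 64 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exported on two listeners so the host sees two paths to the same namespace. A sketch of those calls, lifted from the rpc.py invocations traced below and assuming the target's default RPC socket (/var/tmp/spdk.sock):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, harness's usual options
$rpc_py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512-byte blocks
# -r enables ANA reporting, needed for the optimized/non-optimized flips later on.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421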
00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.146 10:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:42.146 [2024-11-12 10:37:30.750146] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:15:42.146 [2024-11-12 10:37:30.750288] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.405 [2024-11-12 10:37:30.902865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.405 [2024-11-12 10:37:30.943070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.405 [2024-11-12 10:37:30.943158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.405 [2024-11-12 10:37:30.943173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.405 [2024-11-12 10:37:30.943199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.405 [2024-11-12 10:37:30.943209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.405 [2024-11-12 10:37:30.947229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.405 [2024-11-12 10:37:30.947251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.406 [2024-11-12 10:37:30.982089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75928 00:15:43.341 10:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:43.599 [2024-11-12 10:37:32.154140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.599 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:43.858 Malloc0 00:15:43.858 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:44.116 10:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.685 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:44.944 [2024-11-12 10:37:33.486538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.944 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:45.204 [2024-11-12 10:37:33.802821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75991 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75991 /var/tmp/bdevperf.sock 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 75991 ']' 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:45.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:45.204 10:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:45.464 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:45.464 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:15:45.464 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:45.722 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:46.288 Nvme0n1 00:15:46.288 10:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:46.547 Nvme0n1 00:15:46.547 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:46.547 10:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:48.447 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:48.447 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:48.705 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:48.963 10:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:50.339 10:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.339 10:37:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:50.597 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.597 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:50.597 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.598 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:50.856 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.856 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:50.856 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.856 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:51.422 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.422 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:51.422 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.422 10:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:51.422 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.422 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:51.422 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:51.422 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.991 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.991 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:51.991 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:52.250 10:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:52.508 10:37:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:53.446 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:53.446 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:53.446 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.446 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:53.705 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:53.705 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:53.705 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.705 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:54.271 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.271 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:54.271 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.271 10:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:54.528 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.528 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:54.528 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.528 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:54.786 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.786 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:54.786 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.786 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.352 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.352 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:55.352 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.352 10:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:55.611 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.611 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:55.611 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:55.869 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:56.127 10:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:57.064 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:57.064 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:57.064 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.064 10:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.631 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.631 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:57.631 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.631 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:57.889 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.889 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:57.889 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.889 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.147 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.147 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:58.147 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.147 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.406 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.406 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.406 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.406 10:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.665 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.665 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:58.665 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.665 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.924 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.924 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:58.924 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:59.184 10:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:59.442 10:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:00.820 10:37:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.820 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.091 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.091 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.091 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.091 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.352 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.352 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.352 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.352 10:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.610 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.610 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.610 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.610 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.868 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.868 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:01.868 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.868 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.126 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.126 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:02.126 10:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:02.385 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:02.644 10:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.021 10:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.280 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.280 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.280 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.280 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.538 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.538 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.538 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.538 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:05.104 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.104 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:05.104 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.104 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:05.363 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.363 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:05.363 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.363 10:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.636 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.636 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:05.636 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:05.901 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:06.159 10:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:07.094 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:07.094 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:07.094 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.094 10:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.352 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.352 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:07.352 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.352 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.919 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.919 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.919 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.919 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
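Each block above follows the same pattern: set_ANA_state flips the ANA state of the two listeners on the target (one rpc.py nvmf_subsystem_listener_set_ana_state call per port), the script sleeps one second so the host can observe the change, and check_status then reads the host's view through bdevperf's RPC socket, asserting the current/connected/accessible flags per path. The port_status helper, reconstructed from the trace (multipath_status.sh@64; a sketch, not the exact source), amounts to:

  port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    # query the io_paths known to the bdev_nvme layer inside bdevperf and pick the
    # attribute for the path whose listener port matches
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
  }

  # e.g. after "set_ANA_state inaccessible optimized" (the @112 step above), only the
  # 4421 path should be usable:
  port_status 4421 current true
  port_status 4420 accessible false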
00:16:08.178 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.178 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.178 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.178 10:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:08.437 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.437 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:08.437 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.437 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.696 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.696 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:08.696 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.696 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.955 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.955 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:09.522 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:09.522 10:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:09.781 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:10.040 10:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:10.976 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:10.976 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:10.976 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
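Up to this point the controller uses the default active_passive multipath policy, so even with both listeners optimized only one path reports current=true (the @92 check earlier expects "true false"). The @116 call above switches Nvme0n1 to active_active, after which the @121 check expects both paths to be current at the same time. The equivalent standalone commands, taken from the trace:

  # switch the attached controller from active_passive (the default) to active_active
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

  # with both listeners optimized again, both paths should now be current
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'   # true
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").current'   # true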
00:16:10.976 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:11.234 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.234 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:11.234 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.234 10:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:11.492 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.492 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:11.492 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.492 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.751 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.751 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.751 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.751 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:12.010 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.010 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:12.010 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.010 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:12.268 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.268 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:12.268 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.268 10:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.527 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.527 
10:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:12.527 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:12.786 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:13.044 10:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:14.421 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:14.421 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:14.421 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.421 10:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:14.421 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.421 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:14.421 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.421 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.679 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.679 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.679 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.679 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.937 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.937 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.937 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.937 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:15.195 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.195 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:15.196 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.196 10:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:15.454 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.454 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:15.454 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.454 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.712 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.712 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:15.712 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:15.970 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:16.537 10:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:17.473 10:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:17.473 10:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:17.473 10:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.473 10:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.732 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.732 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:17.732 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.732 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.990 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.990 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:17.990 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.990 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:18.248 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.248 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:18.248 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:18.248 10:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.507 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.507 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:18.507 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.507 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.765 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.765 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.765 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.765 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:19.024 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.024 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:19.024 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:19.282 10:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:19.540 10:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:20.475 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:20.475 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:20.475 10:38:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.475 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.734 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.734 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:20.734 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.996 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:21.256 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.256 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:21.256 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.256 10:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:21.513 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.513 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.513 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.513 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:21.772 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.772 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:21.772 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.772 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.030 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.030 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:22.030 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.030 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:22.288 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:22.288 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75991 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 75991 ']' 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 75991 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75991 00:16:22.289 killing process with pid 75991 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75991' 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 75991 00:16:22.289 10:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 75991 00:16:22.289 { 00:16:22.289 "results": [ 00:16:22.289 { 00:16:22.289 "job": "Nvme0n1", 00:16:22.289 "core_mask": "0x4", 00:16:22.289 "workload": "verify", 00:16:22.289 "status": "terminated", 00:16:22.289 "verify_range": { 00:16:22.289 "start": 0, 00:16:22.289 "length": 16384 00:16:22.289 }, 00:16:22.289 "queue_depth": 128, 00:16:22.289 "io_size": 4096, 00:16:22.289 "runtime": 35.747867, 00:16:22.289 "iops": 8561.461862885413, 00:16:22.289 "mibps": 33.443210401896145, 00:16:22.289 "io_failed": 0, 00:16:22.289 "io_timeout": 0, 00:16:22.289 "avg_latency_us": 14919.647010266162, 00:16:22.289 "min_latency_us": 532.48, 00:16:22.289 "max_latency_us": 4026531.84 00:16:22.289 } 00:16:22.289 ], 00:16:22.289 "core_count": 1 00:16:22.289 } 00:16:22.551 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75991 00:16:22.551 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:22.551 [2024-11-12 10:37:33.880097] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:16:22.551 [2024-11-12 10:37:33.880221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75991 ] 00:16:22.551 [2024-11-12 10:37:34.024000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.551 [2024-11-12 10:37:34.054651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.551 [2024-11-12 10:37:34.082614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.551 Running I/O for 90 seconds... 
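The JSON block above is the bdevperf summary for the whole run: the verify workload on Nvme0n1 stayed up for ~35.7 s at ~8561 IOPS (8561.46 x 4096 B ≈ 33.44 MiB/s, matching the mibps field) with io_failed 0 and io_timeout 0, i.e. roughly 306k I/Os completed despite the repeated ANA flips. The per-interval samples and the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notices that follow in try.txt show the same flips from the host side: commands that land on a path whose listener has just been made inaccessible complete with that ANA status and are retried on the remaining path, which is why io_failed stays at 0. A quick way to pull the headline figures back out of the summary, assuming it is saved as clean JSON without the log prefixes (bdevperf_results.json is a hypothetical file name):

  jq '.results[0] | {iops, runtime, io_failed,
                     total_ios: (.iops * .runtime | floor)}' bdevperf_results.json
  # -> iops ~8561.46, runtime ~35.75, io_failed 0, about 306k I/Os in total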
00:16:22.551 7316.00 IOPS, 28.58 MiB/s [2024-11-12T10:38:11.309Z] 7242.50 IOPS, 28.29 MiB/s [2024-11-12T10:38:11.309Z] 7132.33 IOPS, 27.86 MiB/s [2024-11-12T10:38:11.309Z] 7141.25 IOPS, 27.90 MiB/s [2024-11-12T10:38:11.309Z] 7105.00 IOPS, 27.75 MiB/s [2024-11-12T10:38:11.309Z] 7490.00 IOPS, 29.26 MiB/s [2024-11-12T10:38:11.309Z] 7851.43 IOPS, 30.67 MiB/s [2024-11-12T10:38:11.309Z] 8031.00 IOPS, 31.37 MiB/s [2024-11-12T10:38:11.309Z] 8088.00 IOPS, 31.59 MiB/s [2024-11-12T10:38:11.309Z] 8282.30 IOPS, 32.35 MiB/s [2024-11-12T10:38:11.309Z] 8437.73 IOPS, 32.96 MiB/s [2024-11-12T10:38:11.309Z] 8551.33 IOPS, 33.40 MiB/s [2024-11-12T10:38:11.309Z] 8648.85 IOPS, 33.78 MiB/s [2024-11-12T10:38:11.309Z] 8772.93 IOPS, 34.27 MiB/s [2024-11-12T10:38:11.309Z] 8819.53 IOPS, 34.45 MiB/s [2024-11-12T10:38:11.309Z] [2024-11-12 10:37:51.088996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:22.551 [2024-11-12 10:37:51.089383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.551 [2024-11-12 10:37:51.089397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
[... many similar nvme_qpair.c log pairs omitted: 243:nvme_io_qpair_print_command *NOTICE* lines for READ/WRITE commands on sqid:1, each followed by a 474:spdk_nvme_print_completion *NOTICE* line reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamped 2024-11-12 10:37:51 ...]
00:16:22.554 8791.44 IOPS, 34.34 MiB/s [2024-11-12T10:38:11.312Z] 8274.29 IOPS, 32.32 MiB/s [2024-11-12T10:38:11.312Z] 7814.61 IOPS, 30.53 MiB/s [2024-11-12T10:38:11.312Z] 7403.32 IOPS, 28.92 MiB/s [2024-11-12T10:38:11.312Z] 7075.50 IOPS, 27.64 MiB/s [2024-11-12T10:38:11.312Z] 7152.29 IOPS, 27.94 MiB/s [2024-11-12T10:38:11.312Z] 7244.45 IOPS, 28.30 MiB/s [2024-11-12T10:38:11.312Z] 7334.70 IOPS, 28.65 MiB/s [2024-11-12T10:38:11.312Z] 7516.25 IOPS, 29.36 MiB/s [2024-11-12T10:38:11.312Z] 7697.92 IOPS, 30.07 MiB/s [2024-11-12T10:38:11.312Z] 7877.00 IOPS, 30.77 MiB/s [2024-11-12T10:38:11.312Z] 7955.15 IOPS, 31.07 MiB/s [2024-11-12T10:38:11.312Z] 7991.04 IOPS, 31.21 MiB/s [2024-11-12T10:38:11.312Z] 8035.21 IOPS, 31.39 MiB/s [2024-11-12T10:38:11.312Z] 8107.43 IOPS, 31.67 MiB/s [2024-11-12T10:38:11.312Z] 8255.19 IOPS, 32.25 MiB/s [2024-11-12T10:38:11.312Z] 8367.00 IOPS, 32.68 MiB/s [2024-11-12T10:38:11.312Z]
[... further repeated nvme_qpair.c NOTICE command/completion pairs omitted, again all ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamped 2024-11-12 10:38:08 ...]
00:16:22.556 8479.00 IOPS, 33.12 MiB/s [2024-11-12T10:38:11.314Z] 8517.15 IOPS, 33.27 MiB/s [2024-11-12T10:38:11.314Z] 8545.09
IOPS, 33.38 MiB/s [2024-11-12T10:38:11.314Z] Received shutdown signal, test time was about 35.748681 seconds 00:16:22.556 00:16:22.556 Latency(us) 00:16:22.556 [2024-11-12T10:38:11.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.556 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:22.556 Verification LBA range: start 0x0 length 0x4000 00:16:22.556 Nvme0n1 : 35.75 8561.46 33.44 0.00 0.00 14919.65 532.48 4026531.84 00:16:22.556 [2024-11-12T10:38:11.314Z] =================================================================================================================== 00:16:22.556 [2024-11-12T10:38:11.314Z] Total : 8561.46 33.44 0.00 0.00 14919.65 532.48 4026531.84 00:16:22.556 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.815 rmmod nvme_tcp 00:16:22.815 rmmod nvme_fabrics 00:16:22.815 rmmod nvme_keyring 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 75928 ']' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 75928 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 75928 ']' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 75928 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75928 00:16:22.815 killing process with pid 75928 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75928' 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 75928 00:16:22.815 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 75928 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:23.074 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.333 10:38:11 
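Note: the trace above is the tail of nvmftestfini for the previous test: unload the kernel NVMe/TCP modules, kill the nvmf_tgt reactor, strip the SPDK-tagged iptables rules, and tear down the veth/bridge topology (the _remove_spdk_ns call completes just below). A condensed sketch of that teardown pattern, not the SPDK scripts verbatim, assuming the nvmf_* interface names and the nvmf_tgt_ns_spdk namespace used in this run:

    # Illustrative teardown sketch; $nvmfpid stands in for the target pid recorded at start-up.
    modprobe -v -r nvme-tcp || true                      # unload host-side NVMe/TCP modules
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" && wait "$nvmfpid"                   # wait works because nvmf_tgt was launched by this shell
    iptables-save | grep -v SPDK_NVMF | iptables-restore # drop only rules carrying the SPDK_NVMF comment tag
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster || true              # detach bridge ports
        ip link set "$ifc" down || true
    done
    ip link delete nvmf_br type bridge || true           # remove the bridge itself
    ip link delete nvmf_init_if || true                  # initiator-side veth ends
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true             # finally drop the target namespace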
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:23.333 ************************************ 00:16:23.333 END TEST nvmf_host_multipath_status 00:16:23.333 ************************************ 00:16:23.333 00:16:23.333 real 0m41.887s 00:16:23.333 user 2m15.530s 00:16:23.333 sys 0m12.032s 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:23.333 10:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:23.334 10:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.334 10:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.334 ************************************ 00:16:23.334 START TEST nvmf_discovery_remove_ifc 00:16:23.334 ************************************ 00:16:23.334 10:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:23.334 * Looking for test storage... 00:16:23.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.334 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:23.334 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:23.334 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.593 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:23.594 10:38:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:23.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.594 --rc genhtml_branch_coverage=1 00:16:23.594 --rc genhtml_function_coverage=1 00:16:23.594 --rc genhtml_legend=1 00:16:23.594 --rc geninfo_all_blocks=1 00:16:23.594 --rc geninfo_unexecuted_blocks=1 00:16:23.594 00:16:23.594 ' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:23.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.594 --rc genhtml_branch_coverage=1 00:16:23.594 --rc genhtml_function_coverage=1 00:16:23.594 --rc genhtml_legend=1 00:16:23.594 --rc geninfo_all_blocks=1 00:16:23.594 --rc geninfo_unexecuted_blocks=1 00:16:23.594 00:16:23.594 ' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:23.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.594 --rc genhtml_branch_coverage=1 00:16:23.594 --rc genhtml_function_coverage=1 00:16:23.594 --rc genhtml_legend=1 00:16:23.594 --rc geninfo_all_blocks=1 00:16:23.594 --rc geninfo_unexecuted_blocks=1 00:16:23.594 00:16:23.594 ' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:23.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.594 --rc genhtml_branch_coverage=1 00:16:23.594 --rc genhtml_function_coverage=1 00:16:23.594 --rc genhtml_legend=1 00:16:23.594 --rc geninfo_all_blocks=1 00:16:23.594 --rc geninfo_unexecuted_blocks=1 00:16:23.594 
00:16:23.594 ' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:16:23.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.594 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.595 10:38:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.595 Cannot find device "nvmf_init_br" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.595 Cannot find device "nvmf_init_br2" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.595 Cannot find device "nvmf_tgt_br" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.595 Cannot find device "nvmf_tgt_br2" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.595 Cannot find device "nvmf_init_br" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.595 Cannot find device "nvmf_init_br2" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.595 Cannot find device "nvmf_tgt_br" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set 
nvmf_tgt_br2 down 00:16:23.595 Cannot find device "nvmf_tgt_br2" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.595 Cannot find device "nvmf_br" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.595 Cannot find device "nvmf_init_if" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.595 Cannot find device "nvmf_init_if2" 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.595 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.855 10:38:12 
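Note: the nvmf_veth_init trace here amounts to: create a target network namespace, build veth pairs whose far ends become bridge ports, move the target-side interfaces into the namespace, and address everything out of 10.0.0.0/24 (the bridge, firewall, and ping checks follow just below in the trace). A minimal sketch under the same names and addresses, condensed from what the script does:

    # Sketch of the namespace/veth topology being built above; 10.0.0.1-2 on the
    # initiator side, 10.0.0.3-4 inside nvmf_tgt_ns_spdk on the target side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator 1 <-> bridge port
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator 2 <-> bridge port
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target 1    <-> bridge port
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2   # target 2    <-> bridge port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for ifc in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the four *_br ports together
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br
    done
    # accept rules are tagged with an SPDK_NVMF comment so teardown can strip them later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF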
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:16:23.855 00:16:23.855 --- 10.0.0.3 ping statistics --- 00:16:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.855 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.855 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:23.855 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:23.855 00:16:23.855 --- 10.0.0.4 ping statistics --- 00:16:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.855 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:23.855 00:16:23.855 --- 10.0.0.1 ping statistics --- 00:16:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.855 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:23.855 00:16:23.855 --- 10.0.0.2 ping statistics --- 00:16:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.855 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=76835 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 76835 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 76835 ']' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 
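Note: nvmfappstart, traced here, launches the target reactor inside the namespace and then blocks in waitforlisten (max_retries=100 just above) until the RPC socket answers. A rough sketch of that start-and-wait pattern, assuming the spdk_repo paths from this run; the real helper also re-checks that the pid is still alive on every retry:

    # Run nvmf_tgt in the target namespace, remember its pid, poll the RPC Unix socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                          # max_retries=100, as in the trace above
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break                                      # target is up and listening on the RPC socket
        fi
        sleep 0.5
    done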
00:16:23.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.855 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.122 [2024-11-12 10:38:12.619807] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:16:24.122 [2024-11-12 10:38:12.619892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.122 [2024-11-12 10:38:12.774546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.122 [2024-11-12 10:38:12.815481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.122 [2024-11-12 10:38:12.815554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.122 [2024-11-12 10:38:12.815567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.122 [2024-11-12 10:38:12.815577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.122 [2024-11-12 10:38:12.815586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.122 [2024-11-12 10:38:12.815959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.122 [2024-11-12 10:38:12.848449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.385 10:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 [2024-11-12 10:38:12.953853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.385 [2024-11-12 10:38:12.961975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:24.385 null0 00:16:24.385 [2024-11-12 10:38:12.993891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76864 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76864 /tmp/host.sock 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 76864 ']' 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:24.385 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:24.385 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 [2024-11-12 10:38:13.075701] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:16:24.385 [2024-11-12 10:38:13.075795] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76864 ] 00:16:24.644 [2024-11-12 10:38:13.227083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.644 [2024-11-12 10:38:13.266441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.644 [2024-11-12 10:38:13.364130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.644 10:38:13 
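Note: the host-side application traced above is started with --wait-for-rpc so that bdev_nvme options can be applied before subsystem initialization; only then is framework_start_init issued over the private /tmp/host.sock socket. A sketch of that ordering, assuming rpc.py is what rpc_cmd wraps here (a waitforlisten-style poll on /tmp/host.sock belongs between the launch and the first RPC and is omitted):

    # Start the second SPDK app paused, configure bdev_nvme, then release it.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /tmp/host.sock bdev_nvme_set_options -e 1   # must land before framework_start_init
    "$rpc" -s /tmp/host.sock framework_start_init         # lets the app finish subsystem init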
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.644 10:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 [2024-11-12 10:38:14.406819] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:26.021 [2024-11-12 10:38:14.406879] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:26.021 [2024-11-12 10:38:14.406901] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:26.021 [2024-11-12 10:38:14.412869] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:26.021 [2024-11-12 10:38:14.467355] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:26.021 [2024-11-12 10:38:14.468479] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x121b0b0:1 started. 00:16:26.021 [2024-11-12 10:38:14.470207] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:26.021 [2024-11-12 10:38:14.470295] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:26.021 [2024-11-12 10:38:14.470319] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:26.021 [2024-11-12 10:38:14.470335] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:26.021 [2024-11-12 10:38:14.470395] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.021 [2024-11-12 10:38:14.475592] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x121b0b0 was disconnected and freed. delete nvme_qpair. 
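Note: the discovery attach that produced the bdev_nvme log lines above is a single RPC against the discovery service on 10.0.0.3:8009; --wait-for-attach makes it return only once nvme0n1 exists, and the three timeout flags are what later make the controller retry every second, fail I/O fast after one second, and give up about two seconds after the target interface disappears. A sketch with the same parameters, plus an optional sanity check via bdev_nvme_get_controllers (an assumption about how one might verify the attach, not part of this script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers   # expect a controller pointing at 10.0.0.3:4420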
00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:26.021 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:26.022 10:38:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.988 10:38:15 
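Note: the repeating rpc_cmd / jq / sort / xargs blocks here are the wait_for_bdev polling loop: after 10.0.0.3 is deleted and nvmf_tgt_if is downed inside the namespace, the test simply re-reads the host app's bdev list once a second until nvme0n1 drops out and the list is empty. A condensed sketch of that loop, assuming the same host socket and rpc.py client:

    # Sketch of get_bdev_list / wait_for_bdev as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1      # the 2 s ctrlr-loss timeout bounds how long this should take
        done
    }
    wait_for_bdev ''     # bdev disappears once reconnect attempts exhaust the loss timeout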
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:26.988 10:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.363 10:38:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:29.299 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.300 10:38:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.235 10:38:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:30.235 10:38:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.170 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.170 [2024-11-12 10:38:19.898001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:31.170 [2024-11-12 10:38:19.898066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.170 [2024-11-12 10:38:19.898083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.170 [2024-11-12 10:38:19.898095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.171 [2024-11-12 10:38:19.898105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.171 [2024-11-12 10:38:19.898115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.171 [2024-11-12 10:38:19.898125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.171 [2024-11-12 10:38:19.898135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.171 [2024-11-12 10:38:19.898144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.171 [2024-11-12 10:38:19.898154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.171 [2024-11-12 10:38:19.898163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.171 [2024-11-12 10:38:19.898173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7340 is same with the state(6) to be set 00:16:31.171 [2024-11-12 10:38:19.907997] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7340 (9): Bad file descriptor 00:16:31.171 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:31.171 10:38:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.171 [2024-11-12 10:38:19.918016] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:31.171 [2024-11-12 10:38:19.918047] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:31.171 [2024-11-12 10:38:19.918059] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:31.171 [2024-11-12 10:38:19.918065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:31.171 [2024-11-12 10:38:19.918103] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.548 [2024-11-12 10:38:20.968274] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:32.548 [2024-11-12 10:38:20.968402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7340 with addr=10.0.0.3, port=4420 00:16:32.548 [2024-11-12 10:38:20.968439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7340 is same with the state(6) to be set 00:16:32.548 [2024-11-12 10:38:20.968512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7340 (9): Bad file descriptor 00:16:32.548 [2024-11-12 10:38:20.969431] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:32.548 [2024-11-12 10:38:20.969532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:32.548 [2024-11-12 10:38:20.969560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:32.548 [2024-11-12 10:38:20.969582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:32.548 [2024-11-12 10:38:20.969602] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:32.548 [2024-11-12 10:38:20.969616] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:16:32.548 [2024-11-12 10:38:20.969628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:32.548 [2024-11-12 10:38:20.969650] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:32.548 [2024-11-12 10:38:20.969662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.548 10:38:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.484 [2024-11-12 10:38:21.969741] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:33.484 [2024-11-12 10:38:21.969795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:33.484 [2024-11-12 10:38:21.969830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:33.484 [2024-11-12 10:38:21.969843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:33.484 [2024-11-12 10:38:21.969855] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:33.484 [2024-11-12 10:38:21.969867] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:33.484 [2024-11-12 10:38:21.969875] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:33.484 [2024-11-12 10:38:21.969882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:16:33.484 [2024-11-12 10:38:21.969919] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:33.484 [2024-11-12 10:38:21.969969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.484 [2024-11-12 10:38:21.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.484 [2024-11-12 10:38:21.970002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.484 [2024-11-12 10:38:21.970014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.484 [2024-11-12 10:38:21.970027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.484 [2024-11-12 10:38:21.970045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.484 [2024-11-12 10:38:21.970058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.484 [2024-11-12 10:38:21.970068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.484 [2024-11-12 10:38:21.970081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.484 [2024-11-12 10:38:21.970091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.484 [2024-11-12 10:38:21.970103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:16:33.484 [2024-11-12 10:38:21.970674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1182b20 (9): Bad file descriptor 00:16:33.484 [2024-11-12 10:38:21.971693] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:33.484 [2024-11-12 10:38:21.971724] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:33.484 10:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.484 10:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.484 10:38:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:33.484 10:38:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.421 10:38:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.421 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.680 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:34.680 10:38:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.247 [2024-11-12 10:38:23.981757] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:35.247 [2024-11-12 10:38:23.981788] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:35.247 [2024-11-12 10:38:23.981809] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:35.247 [2024-11-12 10:38:23.987791] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:35.506 [2024-11-12 10:38:24.042128] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:35.506 [2024-11-12 10:38:24.042816] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x11d3d00:1 started. 00:16:35.506 [2024-11-12 10:38:24.044096] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:35.506 [2024-11-12 10:38:24.044144] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:35.506 [2024-11-12 10:38:24.044168] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:35.506 [2024-11-12 10:38:24.044197] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:35.506 [2024-11-12 10:38:24.044208] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:35.506 [2024-11-12 10:38:24.050326] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x11d3d00 was disconnected and freed. delete nvme_qpair. 
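The loop traced above (host/discovery_remove_ifc.sh, script lines 29-34) keeps listing the host's bdevs over the per-test RPC socket and sleeps one second at a time until the expected name appears or disappears. A minimal reconstruction of those two helpers, with command names taken from the xtrace; the exact bodies in the SPDK repo may differ:

    get_bdev_list() {
        # Sorted, space-separated names of all bdevs the host app currently sees.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value
        # (an empty string means "wait until every bdev is gone").
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

This is the pattern used twice in this test: first waiting for nvme0n1 to vanish after the target interface loses its address, then waiting for nvme1n1 to show up once the discovery service re-attaches the subsystem after the address is re-added.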
00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76864 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 76864 ']' 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 76864 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:35.506 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76864 00:16:35.765 killing process with pid 76864 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76864' 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 76864 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 76864 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.765 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.765 rmmod nvme_tcp 00:16:35.765 rmmod nvme_fabrics 00:16:36.024 rmmod nvme_keyring 00:16:36.024 10:38:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 76835 ']' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 76835 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 76835 ']' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 76835 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76835 00:16:36.024 killing process with pid 76835 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76835' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 76835 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 76835 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:36.024 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:36.283 ************************************ 00:16:36.283 END TEST nvmf_discovery_remove_ifc 00:16:36.283 ************************************ 00:16:36.283 00:16:36.283 real 0m13.048s 00:16:36.283 user 0m22.114s 00:16:36.283 sys 0m2.433s 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:36.283 10:38:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:36.283 10:38:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:36.283 10:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:36.283 10:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:36.283 10:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.283 ************************************ 00:16:36.283 START TEST nvmf_identify_kernel_target 00:16:36.283 ************************************ 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:36.543 * Looking for test storage... 
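The identify_kernel_target run that starts here first brings the veth/namespace topology back up (nvmf_veth_init in nvmf/common.sh), then turns one of the freshly scanned /dev/nvme* block devices into a kernel NVMe-oF TCP target via configfs (configure_kernel_target), and finally queries it with nvme discover and spdk_nvme_identify. A condensed sketch of the configfs part, with every directory, value and device taken from the trace below; the attribute file names on the right-hand side of the redirects are the standard nvmet ones and are assumed here, since xtrace does not print redirections:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet                        # transport modules assumed available
    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # assumed attr
    echo 1                                > "$subsys/attr_allow_any_host" # assumed attr
    echo /dev/nvme1n1                     > "$ns/device_path"   # device picked by the GPT scan below
    echo 1                                > "$ns/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

Once the symlink is in place, the discovery log further down reports two records on 10.0.0.1:4420: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.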
00:16:36.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:36.543 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:36.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.544 --rc genhtml_branch_coverage=1 00:16:36.544 --rc genhtml_function_coverage=1 00:16:36.544 --rc genhtml_legend=1 00:16:36.544 --rc geninfo_all_blocks=1 00:16:36.544 --rc geninfo_unexecuted_blocks=1 00:16:36.544 00:16:36.544 ' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:36.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.544 --rc genhtml_branch_coverage=1 00:16:36.544 --rc genhtml_function_coverage=1 00:16:36.544 --rc genhtml_legend=1 00:16:36.544 --rc geninfo_all_blocks=1 00:16:36.544 --rc geninfo_unexecuted_blocks=1 00:16:36.544 00:16:36.544 ' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:36.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.544 --rc genhtml_branch_coverage=1 00:16:36.544 --rc genhtml_function_coverage=1 00:16:36.544 --rc genhtml_legend=1 00:16:36.544 --rc geninfo_all_blocks=1 00:16:36.544 --rc geninfo_unexecuted_blocks=1 00:16:36.544 00:16:36.544 ' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:36.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.544 --rc genhtml_branch_coverage=1 00:16:36.544 --rc genhtml_function_coverage=1 00:16:36.544 --rc genhtml_legend=1 00:16:36.544 --rc geninfo_all_blocks=1 00:16:36.544 --rc geninfo_unexecuted_blocks=1 00:16:36.544 00:16:36.544 ' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
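The probe just above runs scripts/common.sh's lt 1.15 2 to decide whether the installed lcov predates 2.x: each version string is split on '.', '-' and ':' and compared component by component, and the first differing component decides the result. A simplified reconstruction of that comparison (the repo's cmp_versions tracks lt/gt/eq flags and supports more operators; this sketch keeps only the behaviour exercised in the trace):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == *'>'* ]]; return; }   # first differing part decides
            ((d1 < d2)) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # all parts equal: only <=, >=, == succeed
    }

    lt 1.15 2   # exit 0, as in the trace

Because the installed 1.15 compares below 2, the '--rc lcov_branch_coverage=... --rc lcov_function_coverage=...' option spelling is exported as LCOV_OPTS immediately afterwards, as the trace above shows.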
00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:36.544 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:36.544 10:38:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.544 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.545 10:38:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:36.545 Cannot find device "nvmf_init_br" 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:36.545 Cannot find device "nvmf_init_br2" 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:36.545 Cannot find device "nvmf_tgt_br" 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:36.545 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.804 Cannot find device "nvmf_tgt_br2" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:36.804 Cannot find device "nvmf_init_br" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:36.804 Cannot find device "nvmf_init_br2" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:36.804 Cannot find device "nvmf_tgt_br" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:36.804 Cannot find device "nvmf_tgt_br2" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:36.804 Cannot find device "nvmf_br" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:36.804 Cannot find device "nvmf_init_if" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:36.804 Cannot find device "nvmf_init_if2" 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.804 10:38:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.804 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.805 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:37.064 10:38:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:37.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:16:37.064 00:16:37.064 --- 10.0.0.3 ping statistics --- 00:16:37.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.064 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:37.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:37.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:16:37.064 00:16:37.064 --- 10.0.0.4 ping statistics --- 00:16:37.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.064 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:37.064 00:16:37.064 --- 10.0.0.1 ping statistics --- 00:16:37.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.064 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:37.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:37.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:37.064 00:16:37.064 --- 10.0.0.2 ping statistics --- 00:16:37.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.064 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.064 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:37.065 10:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:37.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.324 Waiting for block devices as requested 00:16:37.583 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:37.583 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:37.583 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:37.843 No valid GPT data, bailing 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:37.843 10:38:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:37.843 No valid GPT data, bailing 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:37.843 No valid GPT data, bailing 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:37.843 No valid GPT data, bailing 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:37.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -a 10.0.0.1 -t tcp -s 4420 00:16:38.104 00:16:38.104 Discovery Log Number of Records 2, Generation counter 2 00:16:38.104 =====Discovery Log Entry 0====== 00:16:38.104 trtype: tcp 00:16:38.104 adrfam: ipv4 00:16:38.104 subtype: current discovery subsystem 00:16:38.104 treq: not specified, sq flow control disable supported 00:16:38.104 portid: 1 00:16:38.104 trsvcid: 4420 00:16:38.104 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:38.104 traddr: 10.0.0.1 00:16:38.104 eflags: none 00:16:38.104 sectype: none 00:16:38.104 =====Discovery Log Entry 1====== 00:16:38.104 trtype: tcp 00:16:38.104 adrfam: ipv4 00:16:38.104 subtype: nvme subsystem 00:16:38.104 treq: not 
specified, sq flow control disable supported 00:16:38.104 portid: 1 00:16:38.104 trsvcid: 4420 00:16:38.104 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:38.104 traddr: 10.0.0.1 00:16:38.104 eflags: none 00:16:38.104 sectype: none 00:16:38.104 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:38.104 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:38.104 ===================================================== 00:16:38.104 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:38.104 ===================================================== 00:16:38.104 Controller Capabilities/Features 00:16:38.104 ================================ 00:16:38.104 Vendor ID: 0000 00:16:38.104 Subsystem Vendor ID: 0000 00:16:38.104 Serial Number: a68c5cc7473a6066f2d9 00:16:38.104 Model Number: Linux 00:16:38.104 Firmware Version: 6.8.9-20 00:16:38.104 Recommended Arb Burst: 0 00:16:38.104 IEEE OUI Identifier: 00 00 00 00:16:38.104 Multi-path I/O 00:16:38.104 May have multiple subsystem ports: No 00:16:38.104 May have multiple controllers: No 00:16:38.104 Associated with SR-IOV VF: No 00:16:38.104 Max Data Transfer Size: Unlimited 00:16:38.104 Max Number of Namespaces: 0 00:16:38.104 Max Number of I/O Queues: 1024 00:16:38.104 NVMe Specification Version (VS): 1.3 00:16:38.104 NVMe Specification Version (Identify): 1.3 00:16:38.104 Maximum Queue Entries: 1024 00:16:38.104 Contiguous Queues Required: No 00:16:38.104 Arbitration Mechanisms Supported 00:16:38.104 Weighted Round Robin: Not Supported 00:16:38.104 Vendor Specific: Not Supported 00:16:38.104 Reset Timeout: 7500 ms 00:16:38.104 Doorbell Stride: 4 bytes 00:16:38.104 NVM Subsystem Reset: Not Supported 00:16:38.104 Command Sets Supported 00:16:38.104 NVM Command Set: Supported 00:16:38.104 Boot Partition: Not Supported 00:16:38.104 Memory Page Size Minimum: 4096 bytes 00:16:38.104 Memory Page Size Maximum: 4096 bytes 00:16:38.104 Persistent Memory Region: Not Supported 00:16:38.104 Optional Asynchronous Events Supported 00:16:38.104 Namespace Attribute Notices: Not Supported 00:16:38.104 Firmware Activation Notices: Not Supported 00:16:38.104 ANA Change Notices: Not Supported 00:16:38.104 PLE Aggregate Log Change Notices: Not Supported 00:16:38.104 LBA Status Info Alert Notices: Not Supported 00:16:38.104 EGE Aggregate Log Change Notices: Not Supported 00:16:38.104 Normal NVM Subsystem Shutdown event: Not Supported 00:16:38.104 Zone Descriptor Change Notices: Not Supported 00:16:38.104 Discovery Log Change Notices: Supported 00:16:38.104 Controller Attributes 00:16:38.104 128-bit Host Identifier: Not Supported 00:16:38.104 Non-Operational Permissive Mode: Not Supported 00:16:38.104 NVM Sets: Not Supported 00:16:38.104 Read Recovery Levels: Not Supported 00:16:38.104 Endurance Groups: Not Supported 00:16:38.104 Predictable Latency Mode: Not Supported 00:16:38.104 Traffic Based Keep ALive: Not Supported 00:16:38.104 Namespace Granularity: Not Supported 00:16:38.104 SQ Associations: Not Supported 00:16:38.104 UUID List: Not Supported 00:16:38.104 Multi-Domain Subsystem: Not Supported 00:16:38.104 Fixed Capacity Management: Not Supported 00:16:38.104 Variable Capacity Management: Not Supported 00:16:38.104 Delete Endurance Group: Not Supported 00:16:38.104 Delete NVM Set: Not Supported 00:16:38.104 Extended LBA Formats Supported: Not Supported 00:16:38.104 Flexible Data 
Placement Supported: Not Supported 00:16:38.104 00:16:38.104 Controller Memory Buffer Support 00:16:38.104 ================================ 00:16:38.104 Supported: No 00:16:38.104 00:16:38.104 Persistent Memory Region Support 00:16:38.104 ================================ 00:16:38.104 Supported: No 00:16:38.104 00:16:38.104 Admin Command Set Attributes 00:16:38.104 ============================ 00:16:38.104 Security Send/Receive: Not Supported 00:16:38.104 Format NVM: Not Supported 00:16:38.104 Firmware Activate/Download: Not Supported 00:16:38.104 Namespace Management: Not Supported 00:16:38.104 Device Self-Test: Not Supported 00:16:38.104 Directives: Not Supported 00:16:38.104 NVMe-MI: Not Supported 00:16:38.104 Virtualization Management: Not Supported 00:16:38.104 Doorbell Buffer Config: Not Supported 00:16:38.104 Get LBA Status Capability: Not Supported 00:16:38.104 Command & Feature Lockdown Capability: Not Supported 00:16:38.104 Abort Command Limit: 1 00:16:38.104 Async Event Request Limit: 1 00:16:38.104 Number of Firmware Slots: N/A 00:16:38.104 Firmware Slot 1 Read-Only: N/A 00:16:38.104 Firmware Activation Without Reset: N/A 00:16:38.104 Multiple Update Detection Support: N/A 00:16:38.104 Firmware Update Granularity: No Information Provided 00:16:38.104 Per-Namespace SMART Log: No 00:16:38.104 Asymmetric Namespace Access Log Page: Not Supported 00:16:38.104 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:38.104 Command Effects Log Page: Not Supported 00:16:38.104 Get Log Page Extended Data: Supported 00:16:38.104 Telemetry Log Pages: Not Supported 00:16:38.104 Persistent Event Log Pages: Not Supported 00:16:38.104 Supported Log Pages Log Page: May Support 00:16:38.104 Commands Supported & Effects Log Page: Not Supported 00:16:38.104 Feature Identifiers & Effects Log Page:May Support 00:16:38.104 NVMe-MI Commands & Effects Log Page: May Support 00:16:38.104 Data Area 4 for Telemetry Log: Not Supported 00:16:38.104 Error Log Page Entries Supported: 1 00:16:38.104 Keep Alive: Not Supported 00:16:38.104 00:16:38.104 NVM Command Set Attributes 00:16:38.104 ========================== 00:16:38.104 Submission Queue Entry Size 00:16:38.104 Max: 1 00:16:38.104 Min: 1 00:16:38.104 Completion Queue Entry Size 00:16:38.104 Max: 1 00:16:38.104 Min: 1 00:16:38.104 Number of Namespaces: 0 00:16:38.104 Compare Command: Not Supported 00:16:38.104 Write Uncorrectable Command: Not Supported 00:16:38.104 Dataset Management Command: Not Supported 00:16:38.104 Write Zeroes Command: Not Supported 00:16:38.104 Set Features Save Field: Not Supported 00:16:38.104 Reservations: Not Supported 00:16:38.104 Timestamp: Not Supported 00:16:38.104 Copy: Not Supported 00:16:38.104 Volatile Write Cache: Not Present 00:16:38.104 Atomic Write Unit (Normal): 1 00:16:38.104 Atomic Write Unit (PFail): 1 00:16:38.105 Atomic Compare & Write Unit: 1 00:16:38.105 Fused Compare & Write: Not Supported 00:16:38.105 Scatter-Gather List 00:16:38.105 SGL Command Set: Supported 00:16:38.105 SGL Keyed: Not Supported 00:16:38.105 SGL Bit Bucket Descriptor: Not Supported 00:16:38.105 SGL Metadata Pointer: Not Supported 00:16:38.105 Oversized SGL: Not Supported 00:16:38.105 SGL Metadata Address: Not Supported 00:16:38.105 SGL Offset: Supported 00:16:38.105 Transport SGL Data Block: Not Supported 00:16:38.105 Replay Protected Memory Block: Not Supported 00:16:38.105 00:16:38.105 Firmware Slot Information 00:16:38.105 ========================= 00:16:38.105 Active slot: 0 00:16:38.105 00:16:38.105 00:16:38.105 Error Log 
00:16:38.105 ========= 00:16:38.105 00:16:38.105 Active Namespaces 00:16:38.105 ================= 00:16:38.105 Discovery Log Page 00:16:38.105 ================== 00:16:38.105 Generation Counter: 2 00:16:38.105 Number of Records: 2 00:16:38.105 Record Format: 0 00:16:38.105 00:16:38.105 Discovery Log Entry 0 00:16:38.105 ---------------------- 00:16:38.105 Transport Type: 3 (TCP) 00:16:38.105 Address Family: 1 (IPv4) 00:16:38.105 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:38.105 Entry Flags: 00:16:38.105 Duplicate Returned Information: 0 00:16:38.105 Explicit Persistent Connection Support for Discovery: 0 00:16:38.105 Transport Requirements: 00:16:38.105 Secure Channel: Not Specified 00:16:38.105 Port ID: 1 (0x0001) 00:16:38.105 Controller ID: 65535 (0xffff) 00:16:38.105 Admin Max SQ Size: 32 00:16:38.105 Transport Service Identifier: 4420 00:16:38.105 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:38.105 Transport Address: 10.0.0.1 00:16:38.105 Discovery Log Entry 1 00:16:38.105 ---------------------- 00:16:38.105 Transport Type: 3 (TCP) 00:16:38.105 Address Family: 1 (IPv4) 00:16:38.105 Subsystem Type: 2 (NVM Subsystem) 00:16:38.105 Entry Flags: 00:16:38.105 Duplicate Returned Information: 0 00:16:38.105 Explicit Persistent Connection Support for Discovery: 0 00:16:38.105 Transport Requirements: 00:16:38.105 Secure Channel: Not Specified 00:16:38.105 Port ID: 1 (0x0001) 00:16:38.105 Controller ID: 65535 (0xffff) 00:16:38.105 Admin Max SQ Size: 32 00:16:38.105 Transport Service Identifier: 4420 00:16:38.105 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:38.105 Transport Address: 10.0.0.1 00:16:38.105 10:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:38.364 get_feature(0x01) failed 00:16:38.364 get_feature(0x02) failed 00:16:38.364 get_feature(0x04) failed 00:16:38.364 ===================================================== 00:16:38.364 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:38.364 ===================================================== 00:16:38.364 Controller Capabilities/Features 00:16:38.364 ================================ 00:16:38.364 Vendor ID: 0000 00:16:38.364 Subsystem Vendor ID: 0000 00:16:38.364 Serial Number: 95cacec57333db6501fd 00:16:38.364 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:38.364 Firmware Version: 6.8.9-20 00:16:38.365 Recommended Arb Burst: 6 00:16:38.365 IEEE OUI Identifier: 00 00 00 00:16:38.365 Multi-path I/O 00:16:38.365 May have multiple subsystem ports: Yes 00:16:38.365 May have multiple controllers: Yes 00:16:38.365 Associated with SR-IOV VF: No 00:16:38.365 Max Data Transfer Size: Unlimited 00:16:38.365 Max Number of Namespaces: 1024 00:16:38.365 Max Number of I/O Queues: 128 00:16:38.365 NVMe Specification Version (VS): 1.3 00:16:38.365 NVMe Specification Version (Identify): 1.3 00:16:38.365 Maximum Queue Entries: 1024 00:16:38.365 Contiguous Queues Required: No 00:16:38.365 Arbitration Mechanisms Supported 00:16:38.365 Weighted Round Robin: Not Supported 00:16:38.365 Vendor Specific: Not Supported 00:16:38.365 Reset Timeout: 7500 ms 00:16:38.365 Doorbell Stride: 4 bytes 00:16:38.365 NVM Subsystem Reset: Not Supported 00:16:38.365 Command Sets Supported 00:16:38.365 NVM Command Set: Supported 00:16:38.365 Boot Partition: Not Supported 00:16:38.365 Memory 
Page Size Minimum: 4096 bytes 00:16:38.365 Memory Page Size Maximum: 4096 bytes 00:16:38.365 Persistent Memory Region: Not Supported 00:16:38.365 Optional Asynchronous Events Supported 00:16:38.365 Namespace Attribute Notices: Supported 00:16:38.365 Firmware Activation Notices: Not Supported 00:16:38.365 ANA Change Notices: Supported 00:16:38.365 PLE Aggregate Log Change Notices: Not Supported 00:16:38.365 LBA Status Info Alert Notices: Not Supported 00:16:38.365 EGE Aggregate Log Change Notices: Not Supported 00:16:38.365 Normal NVM Subsystem Shutdown event: Not Supported 00:16:38.365 Zone Descriptor Change Notices: Not Supported 00:16:38.365 Discovery Log Change Notices: Not Supported 00:16:38.365 Controller Attributes 00:16:38.365 128-bit Host Identifier: Supported 00:16:38.365 Non-Operational Permissive Mode: Not Supported 00:16:38.365 NVM Sets: Not Supported 00:16:38.365 Read Recovery Levels: Not Supported 00:16:38.365 Endurance Groups: Not Supported 00:16:38.365 Predictable Latency Mode: Not Supported 00:16:38.365 Traffic Based Keep ALive: Supported 00:16:38.365 Namespace Granularity: Not Supported 00:16:38.365 SQ Associations: Not Supported 00:16:38.365 UUID List: Not Supported 00:16:38.365 Multi-Domain Subsystem: Not Supported 00:16:38.365 Fixed Capacity Management: Not Supported 00:16:38.365 Variable Capacity Management: Not Supported 00:16:38.365 Delete Endurance Group: Not Supported 00:16:38.365 Delete NVM Set: Not Supported 00:16:38.365 Extended LBA Formats Supported: Not Supported 00:16:38.365 Flexible Data Placement Supported: Not Supported 00:16:38.365 00:16:38.365 Controller Memory Buffer Support 00:16:38.365 ================================ 00:16:38.365 Supported: No 00:16:38.365 00:16:38.365 Persistent Memory Region Support 00:16:38.365 ================================ 00:16:38.365 Supported: No 00:16:38.365 00:16:38.365 Admin Command Set Attributes 00:16:38.365 ============================ 00:16:38.365 Security Send/Receive: Not Supported 00:16:38.365 Format NVM: Not Supported 00:16:38.365 Firmware Activate/Download: Not Supported 00:16:38.365 Namespace Management: Not Supported 00:16:38.365 Device Self-Test: Not Supported 00:16:38.365 Directives: Not Supported 00:16:38.365 NVMe-MI: Not Supported 00:16:38.365 Virtualization Management: Not Supported 00:16:38.365 Doorbell Buffer Config: Not Supported 00:16:38.365 Get LBA Status Capability: Not Supported 00:16:38.365 Command & Feature Lockdown Capability: Not Supported 00:16:38.365 Abort Command Limit: 4 00:16:38.365 Async Event Request Limit: 4 00:16:38.365 Number of Firmware Slots: N/A 00:16:38.365 Firmware Slot 1 Read-Only: N/A 00:16:38.365 Firmware Activation Without Reset: N/A 00:16:38.365 Multiple Update Detection Support: N/A 00:16:38.365 Firmware Update Granularity: No Information Provided 00:16:38.365 Per-Namespace SMART Log: Yes 00:16:38.365 Asymmetric Namespace Access Log Page: Supported 00:16:38.365 ANA Transition Time : 10 sec 00:16:38.365 00:16:38.365 Asymmetric Namespace Access Capabilities 00:16:38.365 ANA Optimized State : Supported 00:16:38.365 ANA Non-Optimized State : Supported 00:16:38.365 ANA Inaccessible State : Supported 00:16:38.365 ANA Persistent Loss State : Supported 00:16:38.365 ANA Change State : Supported 00:16:38.365 ANAGRPID is not changed : No 00:16:38.365 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:38.365 00:16:38.365 ANA Group Identifier Maximum : 128 00:16:38.365 Number of ANA Group Identifiers : 128 00:16:38.365 Max Number of Allowed Namespaces : 1024 00:16:38.365 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:38.365 Command Effects Log Page: Supported 00:16:38.365 Get Log Page Extended Data: Supported 00:16:38.365 Telemetry Log Pages: Not Supported 00:16:38.365 Persistent Event Log Pages: Not Supported 00:16:38.365 Supported Log Pages Log Page: May Support 00:16:38.365 Commands Supported & Effects Log Page: Not Supported 00:16:38.365 Feature Identifiers & Effects Log Page:May Support 00:16:38.365 NVMe-MI Commands & Effects Log Page: May Support 00:16:38.365 Data Area 4 for Telemetry Log: Not Supported 00:16:38.365 Error Log Page Entries Supported: 128 00:16:38.365 Keep Alive: Supported 00:16:38.365 Keep Alive Granularity: 1000 ms 00:16:38.365 00:16:38.365 NVM Command Set Attributes 00:16:38.365 ========================== 00:16:38.365 Submission Queue Entry Size 00:16:38.365 Max: 64 00:16:38.365 Min: 64 00:16:38.365 Completion Queue Entry Size 00:16:38.365 Max: 16 00:16:38.365 Min: 16 00:16:38.365 Number of Namespaces: 1024 00:16:38.365 Compare Command: Not Supported 00:16:38.365 Write Uncorrectable Command: Not Supported 00:16:38.365 Dataset Management Command: Supported 00:16:38.365 Write Zeroes Command: Supported 00:16:38.365 Set Features Save Field: Not Supported 00:16:38.365 Reservations: Not Supported 00:16:38.365 Timestamp: Not Supported 00:16:38.365 Copy: Not Supported 00:16:38.365 Volatile Write Cache: Present 00:16:38.365 Atomic Write Unit (Normal): 1 00:16:38.365 Atomic Write Unit (PFail): 1 00:16:38.365 Atomic Compare & Write Unit: 1 00:16:38.365 Fused Compare & Write: Not Supported 00:16:38.365 Scatter-Gather List 00:16:38.365 SGL Command Set: Supported 00:16:38.365 SGL Keyed: Not Supported 00:16:38.365 SGL Bit Bucket Descriptor: Not Supported 00:16:38.365 SGL Metadata Pointer: Not Supported 00:16:38.365 Oversized SGL: Not Supported 00:16:38.365 SGL Metadata Address: Not Supported 00:16:38.365 SGL Offset: Supported 00:16:38.365 Transport SGL Data Block: Not Supported 00:16:38.365 Replay Protected Memory Block: Not Supported 00:16:38.365 00:16:38.365 Firmware Slot Information 00:16:38.365 ========================= 00:16:38.365 Active slot: 0 00:16:38.365 00:16:38.365 Asymmetric Namespace Access 00:16:38.365 =========================== 00:16:38.365 Change Count : 0 00:16:38.365 Number of ANA Group Descriptors : 1 00:16:38.365 ANA Group Descriptor : 0 00:16:38.365 ANA Group ID : 1 00:16:38.365 Number of NSID Values : 1 00:16:38.365 Change Count : 0 00:16:38.365 ANA State : 1 00:16:38.365 Namespace Identifier : 1 00:16:38.365 00:16:38.365 Commands Supported and Effects 00:16:38.365 ============================== 00:16:38.365 Admin Commands 00:16:38.365 -------------- 00:16:38.365 Get Log Page (02h): Supported 00:16:38.365 Identify (06h): Supported 00:16:38.365 Abort (08h): Supported 00:16:38.365 Set Features (09h): Supported 00:16:38.365 Get Features (0Ah): Supported 00:16:38.365 Asynchronous Event Request (0Ch): Supported 00:16:38.365 Keep Alive (18h): Supported 00:16:38.365 I/O Commands 00:16:38.365 ------------ 00:16:38.365 Flush (00h): Supported 00:16:38.365 Write (01h): Supported LBA-Change 00:16:38.365 Read (02h): Supported 00:16:38.365 Write Zeroes (08h): Supported LBA-Change 00:16:38.365 Dataset Management (09h): Supported 00:16:38.365 00:16:38.365 Error Log 00:16:38.365 ========= 00:16:38.365 Entry: 0 00:16:38.365 Error Count: 0x3 00:16:38.365 Submission Queue Id: 0x0 00:16:38.365 Command Id: 0x5 00:16:38.365 Phase Bit: 0 00:16:38.365 Status Code: 0x2 00:16:38.365 Status Code Type: 0x0 00:16:38.365 Do Not Retry: 1 00:16:38.365 Error 
Location: 0x28 00:16:38.365 LBA: 0x0 00:16:38.365 Namespace: 0x0 00:16:38.365 Vendor Log Page: 0x0 00:16:38.365 ----------- 00:16:38.365 Entry: 1 00:16:38.365 Error Count: 0x2 00:16:38.365 Submission Queue Id: 0x0 00:16:38.365 Command Id: 0x5 00:16:38.365 Phase Bit: 0 00:16:38.365 Status Code: 0x2 00:16:38.365 Status Code Type: 0x0 00:16:38.365 Do Not Retry: 1 00:16:38.365 Error Location: 0x28 00:16:38.365 LBA: 0x0 00:16:38.365 Namespace: 0x0 00:16:38.365 Vendor Log Page: 0x0 00:16:38.365 ----------- 00:16:38.365 Entry: 2 00:16:38.365 Error Count: 0x1 00:16:38.366 Submission Queue Id: 0x0 00:16:38.366 Command Id: 0x4 00:16:38.366 Phase Bit: 0 00:16:38.366 Status Code: 0x2 00:16:38.366 Status Code Type: 0x0 00:16:38.366 Do Not Retry: 1 00:16:38.366 Error Location: 0x28 00:16:38.366 LBA: 0x0 00:16:38.366 Namespace: 0x0 00:16:38.366 Vendor Log Page: 0x0 00:16:38.366 00:16:38.366 Number of Queues 00:16:38.366 ================ 00:16:38.366 Number of I/O Submission Queues: 128 00:16:38.366 Number of I/O Completion Queues: 128 00:16:38.366 00:16:38.366 ZNS Specific Controller Data 00:16:38.366 ============================ 00:16:38.366 Zone Append Size Limit: 0 00:16:38.366 00:16:38.366 00:16:38.366 Active Namespaces 00:16:38.366 ================= 00:16:38.366 get_feature(0x05) failed 00:16:38.366 Namespace ID:1 00:16:38.366 Command Set Identifier: NVM (00h) 00:16:38.366 Deallocate: Supported 00:16:38.366 Deallocated/Unwritten Error: Not Supported 00:16:38.366 Deallocated Read Value: Unknown 00:16:38.366 Deallocate in Write Zeroes: Not Supported 00:16:38.366 Deallocated Guard Field: 0xFFFF 00:16:38.366 Flush: Supported 00:16:38.366 Reservation: Not Supported 00:16:38.366 Namespace Sharing Capabilities: Multiple Controllers 00:16:38.366 Size (in LBAs): 1310720 (5GiB) 00:16:38.366 Capacity (in LBAs): 1310720 (5GiB) 00:16:38.366 Utilization (in LBAs): 1310720 (5GiB) 00:16:38.366 UUID: d6459651-9cff-4c7f-be53-bbe98a44b35c 00:16:38.366 Thin Provisioning: Not Supported 00:16:38.366 Per-NS Atomic Units: Yes 00:16:38.366 Atomic Boundary Size (Normal): 0 00:16:38.366 Atomic Boundary Size (PFail): 0 00:16:38.366 Atomic Boundary Offset: 0 00:16:38.366 NGUID/EUI64 Never Reused: No 00:16:38.366 ANA group ID: 1 00:16:38.366 Namespace Write Protected: No 00:16:38.366 Number of LBA Formats: 1 00:16:38.366 Current LBA Format: LBA Format #00 00:16:38.366 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:38.366 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.366 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:38.366 rmmod nvme_tcp 00:16:38.366 rmmod nvme_fabrics 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:38.625 10:38:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:38.625 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:38.885 10:38:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:39.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.451 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.710 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.710 00:16:39.710 real 0m3.235s 00:16:39.710 user 0m1.188s 00:16:39.710 sys 0m1.385s 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.710 ************************************ 00:16:39.710 END TEST nvmf_identify_kernel_target 00:16:39.710 ************************************ 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.710 ************************************ 00:16:39.710 START TEST nvmf_auth_host 00:16:39.710 ************************************ 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:39.710 * Looking for test storage... 
00:16:39.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:16:39.710 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:39.970 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:39.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.971 --rc genhtml_branch_coverage=1 00:16:39.971 --rc genhtml_function_coverage=1 00:16:39.971 --rc genhtml_legend=1 00:16:39.971 --rc geninfo_all_blocks=1 00:16:39.971 --rc geninfo_unexecuted_blocks=1 00:16:39.971 00:16:39.971 ' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:39.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.971 --rc genhtml_branch_coverage=1 00:16:39.971 --rc genhtml_function_coverage=1 00:16:39.971 --rc genhtml_legend=1 00:16:39.971 --rc geninfo_all_blocks=1 00:16:39.971 --rc geninfo_unexecuted_blocks=1 00:16:39.971 00:16:39.971 ' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:39.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.971 --rc genhtml_branch_coverage=1 00:16:39.971 --rc genhtml_function_coverage=1 00:16:39.971 --rc genhtml_legend=1 00:16:39.971 --rc geninfo_all_blocks=1 00:16:39.971 --rc geninfo_unexecuted_blocks=1 00:16:39.971 00:16:39.971 ' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:39.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.971 --rc genhtml_branch_coverage=1 00:16:39.971 --rc genhtml_function_coverage=1 00:16:39.971 --rc genhtml_legend=1 00:16:39.971 --rc geninfo_all_blocks=1 00:16:39.971 --rc geninfo_unexecuted_blocks=1 00:16:39.971 00:16:39.971 ' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:39.971 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:39.972 Cannot find device "nvmf_init_br" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:39.972 Cannot find device "nvmf_init_br2" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:39.972 Cannot find device "nvmf_tgt_br" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.972 Cannot find device "nvmf_tgt_br2" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:39.972 Cannot find device "nvmf_init_br" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:39.972 Cannot find device "nvmf_init_br2" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:39.972 Cannot find device "nvmf_tgt_br" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:39.972 Cannot find device "nvmf_tgt_br2" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:39.972 Cannot find device "nvmf_br" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:39.972 Cannot find device "nvmf_init_if" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:39.972 Cannot find device "nvmf_init_if2" 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.972 10:38:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:39.972 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.230 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.230 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.230 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.230 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:40.231 00:16:40.231 --- 10.0.0.3 ping statistics --- 00:16:40.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.231 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:40.231 00:16:40.231 --- 10.0.0.4 ping statistics --- 00:16:40.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.231 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:40.231 00:16:40.231 --- 10.0.0.1 ping statistics --- 00:16:40.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.231 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:40.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:40.231 00:16:40.231 --- 10.0.0.2 ping statistics --- 00:16:40.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.231 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=77842 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 77842 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 77842 ']' 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
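For reference, the veth/namespace/bridge topology that nvmftestinit builds in the trace above can be reproduced standalone with roughly the following commands. This is a condensed sketch assembled only from what the log records (interface and bridge names, the 10.0.0.1-10.0.0.4 addresses, and the port-4420 iptables rules); it is not the full nvmf_veth_init implementation.

# Condensed sketch of the virtual topology set up above (names/addresses as logged).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host side -> target namespace, as verified in the trace

The target application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, as logged just below), which is why the host-side initiator reaches it on 10.0.0.3:4420.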
00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.231 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3d1712eb86851f8276bd4f6c60021b94 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VLI 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3d1712eb86851f8276bd4f6c60021b94 0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3d1712eb86851f8276bd4f6c60021b94 0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3d1712eb86851f8276bd4f6c60021b94 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VLI 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VLI 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.VLI 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.608 10:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d0958abdb86ba334e33bff3fa54406c6a18237063648e662fafb0198a437ef8 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wXF 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d0958abdb86ba334e33bff3fa54406c6a18237063648e662fafb0198a437ef8 3 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d0958abdb86ba334e33bff3fa54406c6a18237063648e662fafb0198a437ef8 3 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d0958abdb86ba334e33bff3fa54406c6a18237063648e662fafb0198a437ef8 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wXF 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wXF 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wXF 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ad6c6d1f6e432023b82fb7620195231b016f0a16526c3c9 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dBp 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ad6c6d1f6e432023b82fb7620195231b016f0a16526c3c9 0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ad6c6d1f6e432023b82fb7620195231b016f0a16526c3c9 0 
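[Editor's sketch] Each gen_dhchap_key call above follows the same recipe: read len/2 random bytes from /dev/urandom as a hex string with xxd, create a temp file named after the digest, run a small python snippet (its body is not shown in the trace) that wraps the hex string in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64>:, and chmod the file to 0600. Decoding the secrets that appear later in this log, the base64 payload is the ASCII hex string followed by its CRC-32, so a rough stand-in for the helper looks like the sketch below; the python body and the CRC byte order are assumptions, only the surrounding shell steps are taken from the trace.

key=$(xxd -p -c0 -l 16 /dev/urandom)             # 32 hex chars, i.e. "gen_dhchap_key null 32"
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret material
digest = int(sys.argv[2])                        # 0=null, 1=sha256, 2=sha384, 3=sha512 (map from the trace)
blob = secret + zlib.crc32(secret).to_bytes(4, "little")   # CRC-32 appended; byte order assumed
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"
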
00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ad6c6d1f6e432023b82fb7620195231b016f0a16526c3c9 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dBp 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dBp 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dBp 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:41.608 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5055da744828bd3060dd619657d1fcf3506e1b7475ea5914 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9DH 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5055da744828bd3060dd619657d1fcf3506e1b7475ea5914 2 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5055da744828bd3060dd619657d1fcf3506e1b7475ea5914 2 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5055da744828bd3060dd619657d1fcf3506e1b7475ea5914 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9DH 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9DH 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9DH 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.609 10:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=afdbbd9ea32ecc092abf075c67acf582 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xAB 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key afdbbd9ea32ecc092abf075c67acf582 1 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 afdbbd9ea32ecc092abf075c67acf582 1 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=afdbbd9ea32ecc092abf075c67acf582 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:41.609 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xAB 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xAB 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xAB 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7707f693247462942c4a5658c8213606 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jtE 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7707f693247462942c4a5658c8213606 1 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7707f693247462942c4a5658c8213606 1 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=7707f693247462942c4a5658c8213606 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jtE 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jtE 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.jtE 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c027c12aecea844557c55493150a21a4a3446367e258a749 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.frx 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c027c12aecea844557c55493150a21a4a3446367e258a749 2 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c027c12aecea844557c55493150a21a4a3446367e258a749 2 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.868 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c027c12aecea844557c55493150a21a4a3446367e258a749 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.frx 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.frx 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.frx 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:41.869 10:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c716e72ead3de49ad75074096eb23c05 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yBP 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c716e72ead3de49ad75074096eb23c05 0 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c716e72ead3de49ad75074096eb23c05 0 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c716e72ead3de49ad75074096eb23c05 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yBP 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yBP 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yBP 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=145df623f7d74d20c5b64f1c0c68994f374629d34f13956536e40b925e0a35e1 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gFo 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 145df623f7d74d20c5b64f1c0c68994f374629d34f13956536e40b925e0a35e1 3 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 145df623f7d74d20c5b64f1c0c68994f374629d34f13956536e40b925e0a35e1 3 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=145df623f7d74d20c5b64f1c0c68994f374629d34f13956536e40b925e0a35e1 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:41.869 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gFo 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gFo 00:16:42.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gFo 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77842 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 77842 ']' 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:42.128 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VLI 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wXF ]] 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wXF 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dBp 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9DH ]] 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9DH 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xAB 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.jtE ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jtE 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.frx 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yBP ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yBP 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gFo 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:42.388 10:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:42.388 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:42.956 Waiting for block devices as requested 00:16:42.956 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:42.956 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:43.524 No valid GPT data, bailing 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:43.524 No valid GPT data, bailing 00:16:43.524 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:43.783 No valid GPT data, bailing 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:43.783 No valid GPT data, bailing 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:43.783 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -a 10.0.0.1 -t tcp -s 4420 00:16:43.783 00:16:43.783 Discovery Log Number of Records 2, Generation counter 2 00:16:43.783 =====Discovery Log Entry 0====== 00:16:43.783 trtype: tcp 00:16:43.783 adrfam: ipv4 00:16:43.783 subtype: current discovery subsystem 00:16:43.783 treq: not specified, sq flow control disable supported 00:16:43.783 portid: 1 00:16:43.783 trsvcid: 4420 00:16:43.784 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:43.784 traddr: 10.0.0.1 00:16:43.784 eflags: none 00:16:43.784 sectype: none 00:16:43.784 =====Discovery Log Entry 1====== 00:16:43.784 trtype: tcp 00:16:43.784 adrfam: ipv4 00:16:43.784 subtype: nvme subsystem 00:16:43.784 treq: not specified, sq flow control disable supported 00:16:43.784 portid: 1 00:16:43.784 trsvcid: 4420 00:16:43.784 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:43.784 traddr: 10.0.0.1 00:16:43.784 eflags: none 00:16:43.784 sectype: none 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.784 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.043 nvme0n1 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.043 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 nvme0n1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 
10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.302 10:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.302 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 nvme0n1 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.561 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:44.562 10:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.562 nvme0n1 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.562 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.821 10:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.821 nvme0n1 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.821 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.822 
10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.822 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
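[Annotation] The trace above is one pass of the test's nested loop: for each dhgroup (ffdhe2048 here) and each keyid, the target-side key is installed via the nvmet_auth_set_key helper, the host is restricted to a single digest/dhgroup pair with bdev_nvme_set_options, and an authenticated attach/verify/detach cycle is run against 10.0.0.1:4420. A minimal sketch of that per-key cycle, assuming the test harness's rpc_cmd wrapper (from common/autotest_common.sh) and DH-HMAC-CHAP keys named key1/ckey1 that were registered earlier in the test, might look like:

    # Sketch only: one authenticated connect cycle as exercised in the trace above.
    # Assumes rpc_cmd (harness RPC wrapper) and keyring entries key1/ckey1 already exist.
    digest=sha256
    dhgroup=ffdhe2048

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP host and controller keys (bidirectional auth).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up, then tear it down before the next iteration.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The entries that follow repeat the same cycle for the remaining key IDs and for the ffdhe3072 and ffdhe4096 groups.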
00:16:45.081 nvme0n1 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.081 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.340 10:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.340 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 nvme0n1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.599 10:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.599 10:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.599 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 nvme0n1 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.858 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 nvme0n1 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 nvme0n1 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.118 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.377 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.378 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 nvme0n1 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.378 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.944 10:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.944 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.214 nvme0n1 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.214 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.215 10:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.215 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.514 nvme0n1 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.514 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.823 nvme0n1 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.823 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 nvme0n1 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.082 10:38:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.082 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.341 nvme0n1 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.341 10:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
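Each pass of the trace follows the same pattern driven by host/auth.sh: program the key on the target side, restrict the host to one digest/dhgroup pair, attach with the matching keys, verify the controller appeared, then detach. A condensed sketch of one iteration, using only the RPCs visible in the trace (rpc_cmd is the suite's wrapper around SPDK's rpc.py, and the keyring setup that registers key0..key4 and ckey0..ckey3 happens earlier in the script and is assumed here):

    # One iteration as reconstructed from the xtrace above (sha256 / ffdhe4096 / keyid=2).
    digest=sha256 dhgroup=ffdhe4096 keyid=2
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # target-side key (auth.sh@103)
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                        # host-side auth policy (auth.sh@60)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"   # auth.sh@61
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] # authenticated? (auth.sh@64)
    rpc_cmd bdev_nvme_detach_controller nvme0                   # clean up for the next keyid (auth.sh@65)

In the actual script the --dhchap-ctrlr-key argument is only added when ckeys[keyid] is non-empty (the ckey=(${ckeys[keyid]:+...}) expansion at host/auth.sh@58), which is why the keyid=4 passes attach with key4 alone.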
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.341 10:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.244 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.503 nvme0n1 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.503 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.504 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.762 nvme0n1 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.762 10:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.762 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.763 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.763 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.763 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.763 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.021 10:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.021 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.279 nvme0n1 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:51.280 10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.280 
10:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.538 nvme0n1 00:16:51.538 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.538 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.538 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.538 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.538 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.796 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.797 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.055 nvme0n1 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.055 10:38:40 
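The nvmet_auth_set_key calls in this trace only show values being echoed ('hmac(sha256)', the dhgroup name, and the DHHC-1 secrets); where they are written is not visible in this excerpt. On a Linux kernel nvmet target these settings normally live in the configfs host entry, so the helper plausibly does something along these lines (paths and attribute names are an assumption based on the upstream nvmet configfs layout, not taken from this log):

    # Hypothetical body for nvmet_auth_set_key; $key and $ckey are the DHHC-1
    # secrets selected at auth.sh@45/@46 above.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"     # digest echoed at auth.sh@48
    echo ffdhe8192      > "$host_cfg/dhchap_dhgroup"  # dhgroup echoed at auth.sh@49
    echo "$key"         > "$host_cfg/dhchap_key"      # host secret echoed at auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # bidirectional secret, when present (auth.sh@51)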
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.055 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.056 10:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.623 nvme0n1 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.623 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.883 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 nvme0n1 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.451 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.452 
10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.452 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.020 nvme0n1 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.020 10:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.587 nvme0n1 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.587 10:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.587 10:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.587 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.154 nvme0n1 00:16:55.154 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.154 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.154 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.154 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.154 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.413 nvme0n1 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.413 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.672 nvme0n1 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.672 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:55.673 
10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.673 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 nvme0n1 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.932 
10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 nvme0n1 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.932 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.191 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 nvme0n1 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.192 10:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.451 nvme0n1 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.451 
10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.451 10:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.451 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.452 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.452 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.452 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 nvme0n1 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:56.711 10:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 nvme0n1 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.711 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.712 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.971 10:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 nvme0n1 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.971 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:57.230 
10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.230 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:57.231 nvme0n1 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:57.231 10:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.231 10:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.489 nvme0n1 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.489 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.490 10:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.490 10:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.490 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.749 nvme0n1 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.749 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.008 nvme0n1 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:58.008 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.009 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.267 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.267 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.268 nvme0n1 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.268 10:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.527 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.787 nvme0n1 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.787 10:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.787 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 nvme0n1 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.046 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.305 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.305 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.305 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.305 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.305 10:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.305 10:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 nvme0n1 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.564 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.823 nvme0n1 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.823 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.083 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.343 nvme0n1 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.343 10:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.343 10:38:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.343 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.912 nvme0n1 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.912 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.913 10:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 nvme0n1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.479 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.047 nvme0n1 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.047 10:38:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.047 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.306 10:38:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.306 10:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.874 nvme0n1 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.875 10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.875 
10:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.442 nvme0n1 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.442 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.700 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.701 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 nvme0n1 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:04.268 10:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.268 10:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 nvme0n1 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.268 10:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:04.527 10:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 nvme0n1 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.528 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 nvme0n1 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.788 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.789 nvme0n1 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.789 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.057 nvme0n1 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:05.057 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.058 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.328 nvme0n1 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.328 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.329 10:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 nvme0n1 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:05.588 
10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 nvme0n1 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.588 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.848 
10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.848 nvme0n1 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.848 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 nvme0n1 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.108 10:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.367 nvme0n1 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.367 
10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:06.367 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.368 10:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.368 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 nvme0n1 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:06.627 10:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.627 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.886 nvme0n1 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.886 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.146 10:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.146 nvme0n1 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.146 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.405 
10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.405 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.406 10:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
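The trace above repeats one pattern per digest/dhgroup/key index: restrict the host to a single DH-HMAC-CHAP digest and DH group, attach the controller with the key pair for that index, confirm the controller came up, then detach before the next iteration. A minimal sketch of that loop is shown below, using only the RPCs visible in this log; rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, and the NQNs, 10.0.0.1:4420 target, and key names are the values traced above, not new parameters.

# One iteration of the connect_authenticate flow traced in this log (sketch, not host/auth.sh itself)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # --dhchap-ctrlr-key is passed only when a controller key exists for this keyid
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 when authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0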
00:17:07.406 nvme0n1 00:17:07.406 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.406 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.406 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.406 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.406 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:07.664 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.665 10:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.665 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.924 nvme0n1 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.924 10:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:07.924 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.925 10:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.925 10:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 nvme0n1 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.493 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 nvme0n1 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.752 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.011 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 nvme0n1 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 10:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 nvme0n1 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2QxNzEyZWI4Njg1MWY4Mjc2YmQ0ZjZjNjAwMjFiOTTm6sSn: 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQwOTU4YWJkYjg2YmEzMzRlMzNiZmYzZmE1NDQwNmM2YTE4MjM3MDYzNjQ4ZTY2MmZhZmIwMTk4YTQzN2VmOLloiqo=: 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.839 10:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.839 10:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.410 nvme0n1 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.410 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.411 10:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.411 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.346 nvme0n1 00:17:11.346 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.346 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.347 10:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.913 nvme0n1 00:17:11.913 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzAyN2MxMmFlY2VhODQ0NTU3YzU1NDkzMTUwYTIxYTRhMzQ0NjM2N2UyNThhNzQ51JUM7A==: 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzcxNmU3MmVhZDNkZTQ5YWQ3NTA3NDA5NmViMjNjMDX49CMO: 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.914 10:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.482 nvme0n1 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1ZGY2MjNmN2Q3NGQyMGM1YjY0ZjFjMGM2ODk5NGYzNzQ2MjlkMzRmMTM5NTY1MzZlNDBiOTI1ZTBhMzVlMTRp1so=: 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.482 10:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.482 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 nvme0n1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 request: 00:17:13.421 { 00:17:13.421 "name": "nvme0", 00:17:13.421 "trtype": "tcp", 00:17:13.421 "traddr": "10.0.0.1", 00:17:13.421 "adrfam": "ipv4", 00:17:13.421 "trsvcid": "4420", 00:17:13.421 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:13.421 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:13.421 "prchk_reftag": false, 00:17:13.421 "prchk_guard": false, 00:17:13.421 "hdgst": false, 00:17:13.421 "ddgst": false, 00:17:13.421 "allow_unrecognized_csi": false, 00:17:13.421 "method": "bdev_nvme_attach_controller", 00:17:13.421 "req_id": 1 00:17:13.421 } 00:17:13.421 Got JSON-RPC error response 00:17:13.421 response: 00:17:13.421 { 00:17:13.421 "code": -5, 00:17:13.421 "message": "Input/output error" 00:17:13.421 } 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:13.421 10:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:13.421 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:13.421 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:13.421 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.421 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 request: 00:17:13.422 { 00:17:13.422 "name": "nvme0", 00:17:13.422 "trtype": "tcp", 00:17:13.422 "traddr": "10.0.0.1", 00:17:13.422 "adrfam": "ipv4", 00:17:13.422 "trsvcid": "4420", 00:17:13.422 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:13.422 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:13.422 "prchk_reftag": false, 00:17:13.422 "prchk_guard": false, 00:17:13.422 "hdgst": false, 00:17:13.422 "ddgst": false, 00:17:13.422 "dhchap_key": "key2", 00:17:13.422 "allow_unrecognized_csi": false, 00:17:13.422 "method": "bdev_nvme_attach_controller", 00:17:13.422 "req_id": 1 00:17:13.422 } 00:17:13.422 Got JSON-RPC error response 00:17:13.422 response: 00:17:13.422 { 00:17:13.422 "code": -5, 00:17:13.422 "message": "Input/output error" 00:17:13.422 } 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.422 10:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 request: 00:17:13.422 { 00:17:13.422 "name": "nvme0", 00:17:13.422 "trtype": "tcp", 00:17:13.422 "traddr": "10.0.0.1", 00:17:13.422 "adrfam": "ipv4", 00:17:13.422 "trsvcid": "4420", 
00:17:13.422 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:13.422 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:13.422 "prchk_reftag": false, 00:17:13.422 "prchk_guard": false, 00:17:13.422 "hdgst": false, 00:17:13.422 "ddgst": false, 00:17:13.422 "dhchap_key": "key1", 00:17:13.422 "dhchap_ctrlr_key": "ckey2", 00:17:13.422 "allow_unrecognized_csi": false, 00:17:13.422 "method": "bdev_nvme_attach_controller", 00:17:13.422 "req_id": 1 00:17:13.422 } 00:17:13.422 Got JSON-RPC error response 00:17:13.422 response: 00:17:13.422 { 00:17:13.422 "code": -5, 00:17:13.422 "message": "Input/output error" 00:17:13.422 } 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.422 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 nvme0n1 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 request: 00:17:13.681 { 00:17:13.681 "name": "nvme0", 00:17:13.681 "dhchap_key": "key1", 00:17:13.681 "dhchap_ctrlr_key": "ckey2", 00:17:13.681 "method": "bdev_nvme_set_keys", 00:17:13.681 "req_id": 1 00:17:13.681 } 00:17:13.681 Got JSON-RPC error response 00:17:13.681 response: 00:17:13.681 
{ 00:17:13.681 "code": -13, 00:17:13.681 "message": "Permission denied" 00:17:13.681 } 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.681 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:13.682 10:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFkNmM2ZDFmNmU0MzIwMjNiODJmYjc2MjAxOTUyMzFiMDE2ZjBhMTY1MjZjM2M5IdVkbA==: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTA1NWRhNzQ0ODI4YmQzMDYwZGQ2MTk2NTdkMWZjZjM1MDZlMWI3NDc1ZWE1OTE0qyWrTg==: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.060 nvme0n1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkYmJkOWVhMzJlY2MwOTJhYmYwNzVjNjdhY2Y1ODIxDfXM: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: ]] 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzcwN2Y2OTMyNDc0NjI5NDJjNGE1NjU4YzgyMTM2MDaXQtfA: 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:15.060 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.061 request: 00:17:15.061 { 00:17:15.061 "name": "nvme0", 00:17:15.061 "dhchap_key": "key2", 00:17:15.061 "dhchap_ctrlr_key": "ckey1", 00:17:15.061 "method": "bdev_nvme_set_keys", 00:17:15.061 "req_id": 1 00:17:15.061 } 00:17:15.061 Got JSON-RPC error response 00:17:15.061 response: 00:17:15.061 { 00:17:15.061 "code": -13, 00:17:15.061 "message": "Permission denied" 00:17:15.061 } 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:15.061 10:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.997 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.997 rmmod nvme_tcp 00:17:15.997 rmmod nvme_fabrics 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 77842 ']' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 77842 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 77842 ']' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 77842 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77842 00:17:16.256 killing process with pid 77842 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77842' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 77842 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 77842 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:16.256 10:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:16.256 10:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.256 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:16.256 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:16.516 10:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.452 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
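The cleanup trace above unwinds the kernel nvmet target by deleting its configfs tree in reverse order of creation and then unloading the modules. A minimal standalone sketch of that sequence, using the same NQNs and port number as the test; the target of the bare "echo 0" step is not visible in the trace and is assumed here to be the namespace enable attribute:

    # hedged sketch of the clean_kernel_target unwind order shown above
    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0    # drop the host ACL symlink
    rmdir $cfg/hosts/nqn.2024-02.io.spdk:host0               # remove the host entry
    echo 0 > $subsys/namespaces/1/enable                     # assumed: disable the namespace first
    rm -f $cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 # unlink the subsystem from port 1
    rmdir $subsys/namespaces/1
    rmdir $cfg/ports/1
    rmdir $subsys
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules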
00:17:17.452 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:17.452 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.VLI /tmp/spdk.key-null.dBp /tmp/spdk.key-sha256.xAB /tmp/spdk.key-sha384.frx /tmp/spdk.key-sha512.gFo /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:17.452 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.711 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.711 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:17.969 00:17:17.969 real 0m38.168s 00:17:17.969 user 0m34.728s 00:17:17.969 sys 0m3.906s 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:17.969 ************************************ 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 END TEST nvmf_auth_host 00:17:17.969 ************************************ 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 ************************************ 00:17:17.969 START TEST nvmf_digest 00:17:17.969 ************************************ 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:17.969 * Looking for test storage... 
00:17:17.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.969 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.970 --rc genhtml_branch_coverage=1 00:17:17.970 --rc genhtml_function_coverage=1 00:17:17.970 --rc genhtml_legend=1 00:17:17.970 --rc geninfo_all_blocks=1 00:17:17.970 --rc geninfo_unexecuted_blocks=1 00:17:17.970 00:17:17.970 ' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.970 --rc genhtml_branch_coverage=1 00:17:17.970 --rc genhtml_function_coverage=1 00:17:17.970 --rc genhtml_legend=1 00:17:17.970 --rc geninfo_all_blocks=1 00:17:17.970 --rc geninfo_unexecuted_blocks=1 00:17:17.970 00:17:17.970 ' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.970 --rc genhtml_branch_coverage=1 00:17:17.970 --rc genhtml_function_coverage=1 00:17:17.970 --rc genhtml_legend=1 00:17:17.970 --rc geninfo_all_blocks=1 00:17:17.970 --rc geninfo_unexecuted_blocks=1 00:17:17.970 00:17:17.970 ' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.970 --rc genhtml_branch_coverage=1 00:17:17.970 --rc genhtml_function_coverage=1 00:17:17.970 --rc genhtml_legend=1 00:17:17.970 --rc geninfo_all_blocks=1 00:17:17.970 --rc geninfo_unexecuted_blocks=1 00:17:17.970 00:17:17.970 ' 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.970 10:39:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.970 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:18.229 Cannot find device "nvmf_init_br" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:18.229 Cannot find device "nvmf_init_br2" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:18.229 Cannot find device "nvmf_tgt_br" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:18.229 Cannot find device "nvmf_tgt_br2" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:18.229 Cannot find device "nvmf_init_br" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:18.229 Cannot find device "nvmf_init_br2" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:18.229 Cannot find device "nvmf_tgt_br" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:18.229 Cannot find device "nvmf_tgt_br2" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:18.229 Cannot find device "nvmf_br" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:18.229 Cannot find device "nvmf_init_if" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:18.229 Cannot find device "nvmf_init_if2" 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.229 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.230 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.230 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.488 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.488 10:39:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:18.488 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:18.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:18.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:17:18.488 00:17:18.488 --- 10.0.0.3 ping statistics --- 00:17:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.488 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:18.488 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:18.488 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:17:18.488 00:17:18.488 --- 10.0.0.4 ping statistics --- 00:17:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.488 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:18.488 00:17:18.488 --- 10.0.0.1 ping statistics --- 00:17:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.488 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:18.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:18.488 00:17:18.488 --- 10.0.0.2 ping statistics --- 00:17:18.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.488 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:18.488 ************************************ 00:17:18.488 START TEST nvmf_digest_clean 00:17:18.488 ************************************ 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
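The setup trace above builds the virtual test topology: the target runs inside a network namespace, each side gets a veth pair, and the host-side peers are enslaved to a bridge so the initiator (10.0.0.1) and target (10.0.0.3) can reach each other; the pings then verify both directions. A reduced sketch of one initiator/target path with the same interface names and addresses (the full script also creates the second pair of interfaces and extra firewall rules):

    # reduced sketch: one initiator-side and one target-side veth pair joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                         # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1          # target -> initiator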
00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79499 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79499 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79499 ']' 00:17:18.488 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.489 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.489 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.489 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.489 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.746 [2024-11-12 10:39:07.252293] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:18.746 [2024-11-12 10:39:07.252398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.746 [2024-11-12 10:39:07.401912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.746 [2024-11-12 10:39:07.431347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.746 [2024-11-12 10:39:07.431401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.747 [2024-11-12 10:39:07.431427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.747 [2024-11-12 10:39:07.431434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.747 [2024-11-12 10:39:07.431441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.747 [2024-11-12 10:39:07.431790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.005 [2024-11-12 10:39:07.600174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:19.005 null0 00:17:19.005 [2024-11-12 10:39:07.644531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.005 [2024-11-12 10:39:07.668766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79522 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79522 /var/tmp/bperf.sock 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79522 ']' 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:19.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:19.005 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.005 [2024-11-12 10:39:07.732123] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:19.005 [2024-11-12 10:39:07.732256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79522 ] 00:17:19.264 [2024-11-12 10:39:07.887981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.264 [2024-11-12 10:39:07.926590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.199 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.199 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:20.199 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:20.199 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:20.199 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:20.457 [2024-11-12 10:39:08.988517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.457 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.457 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.716 nvme0n1 00:17:20.716 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:20.716 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:20.716 Running I/O for 2 seconds... 
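Everything in this first pass is driven over bdevperf's private RPC socket rather than the target's: bdevperf is started with -z/--wait-for-rpc on /var/tmp/bperf.sock, its framework is then initialized, an NVMe-oF controller is attached with TCP data digest enabled (--ddgst), and the 2-second workload is kicked off through bdevperf.py. A minimal sketch of that sequence, using only the commands already visible in the log (paths shown relative to the spdk repo; the 10.0.0.3:4420 listener and nqn.2016-06.io.spdk:cnode1 subsystem are this fixture's, not general defaults):

  # workload generator, idle until its framework is started (already running above)
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish subsystem init on the bperf instance
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach the target namespace with data digest (--ddgst) turned on
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start the timed run; the helper issues perform_tests over the same socket
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests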
00:17:23.028 17272.00 IOPS, 67.47 MiB/s [2024-11-12T10:39:11.786Z] 17462.50 IOPS, 68.21 MiB/s 00:17:23.028 Latency(us) 00:17:23.028 [2024-11-12T10:39:11.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.028 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:23.028 nvme0n1 : 2.01 17450.10 68.16 0.00 0.00 7329.67 6613.18 22043.93 00:17:23.028 [2024-11-12T10:39:11.786Z] =================================================================================================================== 00:17:23.028 [2024-11-12T10:39:11.786Z] Total : 17450.10 68.16 0.00 0.00 7329.67 6613.18 22043.93 00:17:23.028 { 00:17:23.028 "results": [ 00:17:23.028 { 00:17:23.028 "job": "nvme0n1", 00:17:23.028 "core_mask": "0x2", 00:17:23.028 "workload": "randread", 00:17:23.028 "status": "finished", 00:17:23.028 "queue_depth": 128, 00:17:23.028 "io_size": 4096, 00:17:23.028 "runtime": 2.008756, 00:17:23.028 "iops": 17450.10344710856, 00:17:23.028 "mibps": 68.1644665902678, 00:17:23.028 "io_failed": 0, 00:17:23.028 "io_timeout": 0, 00:17:23.028 "avg_latency_us": 7329.666453552154, 00:17:23.028 "min_latency_us": 6613.178181818182, 00:17:23.028 "max_latency_us": 22043.927272727273 00:17:23.028 } 00:17:23.028 ], 00:17:23.028 "core_count": 1 00:17:23.028 } 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:23.028 | select(.opcode=="crc32c") 00:17:23.028 | "\(.module_name) \(.executed)"' 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79522 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79522 ']' 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79522 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:23.028 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79522 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
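After each run the script checks which accel module actually executed the CRC32C digest work. The accel_get_stats call and jq filter above amount to this query against the bperf socket (same filter as in the log); with DSA scanning disabled (scan_dsa=false) the expected answer is the software module with a non-zero executed count:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the test then asserts: executed > 0 and module_name == software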
00:17:23.029 killing process with pid 79522 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79522' 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79522 00:17:23.029 Received shutdown signal, test time was about 2.000000 seconds 00:17:23.029 00:17:23.029 Latency(us) 00:17:23.029 [2024-11-12T10:39:11.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.029 [2024-11-12T10:39:11.787Z] =================================================================================================================== 00:17:23.029 [2024-11-12T10:39:11.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.029 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79522 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79578 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79578 /var/tmp/bperf.sock 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79578 ']' 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:23.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:23.287 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:23.287 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:23.287 Zero copy mechanism will not be used. 00:17:23.287 [2024-11-12 10:39:11.910023] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
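The 128 KiB pass set up above also prints an informational notice from the sock layer: a 131072-byte request exceeds the 65536-byte zero-copy send threshold, so those sends take the regular copying path. That is expected with these defaults and has no bearing on the digest checks. If a larger threshold were wanted, the place to look would be the sock_impl_set_options RPC; the exact flag spelling is an assumption here and should be confirmed against scripts/rpc.py sock_impl_set_options --help on the build in use, for example something along the lines of:

  # assumption: option name may differ between SPDK versions
  scripts/rpc.py -s /var/tmp/bperf.sock sock_impl_set_options -i uring --zerocopy-threshold 131072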
00:17:23.287 [2024-11-12 10:39:11.910121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79578 ] 00:17:23.546 [2024-11-12 10:39:12.047946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.546 [2024-11-12 10:39:12.076157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.546 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.546 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:23.546 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:23.546 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:23.546 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:23.805 [2024-11-12 10:39:12.423638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:23.805 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.805 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:24.064 nvme0n1 00:17:24.064 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:24.064 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:24.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:24.323 Zero copy mechanism will not be used. 00:17:24.323 Running I/O for 2 seconds... 
00:17:26.237 8480.00 IOPS, 1060.00 MiB/s [2024-11-12T10:39:14.995Z] 8576.00 IOPS, 1072.00 MiB/s 00:17:26.237 Latency(us) 00:17:26.237 [2024-11-12T10:39:14.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.237 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:26.237 nvme0n1 : 2.00 8573.00 1071.62 0.00 0.00 1863.46 1608.61 11081.54 00:17:26.237 [2024-11-12T10:39:14.995Z] =================================================================================================================== 00:17:26.237 [2024-11-12T10:39:14.995Z] Total : 8573.00 1071.62 0.00 0.00 1863.46 1608.61 11081.54 00:17:26.237 { 00:17:26.237 "results": [ 00:17:26.237 { 00:17:26.237 "job": "nvme0n1", 00:17:26.237 "core_mask": "0x2", 00:17:26.237 "workload": "randread", 00:17:26.237 "status": "finished", 00:17:26.237 "queue_depth": 16, 00:17:26.237 "io_size": 131072, 00:17:26.237 "runtime": 2.002567, 00:17:26.237 "iops": 8572.996558916631, 00:17:26.237 "mibps": 1071.624569864579, 00:17:26.237 "io_failed": 0, 00:17:26.237 "io_timeout": 0, 00:17:26.237 "avg_latency_us": 1863.4600694738626, 00:17:26.237 "min_latency_us": 1608.610909090909, 00:17:26.237 "max_latency_us": 11081.541818181819 00:17:26.237 } 00:17:26.237 ], 00:17:26.237 "core_count": 1 00:17:26.237 } 00:17:26.237 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:26.237 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:26.237 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:26.237 | select(.opcode=="crc32c") 00:17:26.237 | "\(.module_name) \(.executed)"' 00:17:26.237 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:26.237 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79578 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79578 ']' 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79578 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79578 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
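A quick way to sanity-check these tables: the MiB/s column is simply IOPS multiplied by the I/O size. For this 128 KiB pass, 8572.99 IOPS times 131072 bytes divided by 1048576 comes out to about 1071.62 MiB/s, which matches the reported throughput. As a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8572.996558916631 * 131072 / 1048576 }'   # -> 1071.62 MiB/s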
00:17:26.496 killing process with pid 79578 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79578' 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79578 00:17:26.496 Received shutdown signal, test time was about 2.000000 seconds 00:17:26.496 00:17:26.496 Latency(us) 00:17:26.496 [2024-11-12T10:39:15.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.496 [2024-11-12T10:39:15.254Z] =================================================================================================================== 00:17:26.496 [2024-11-12T10:39:15.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.496 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79578 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79631 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79631 /var/tmp/bperf.sock 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79631 ']' 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:26.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:26.755 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 [2024-11-12 10:39:15.347758] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:17:26.755 [2024-11-12 10:39:15.347858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79631 ] 00:17:26.755 [2024-11-12 10:39:15.484265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.014 [2024-11-12 10:39:15.513401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.014 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:27.014 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:27.014 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:27.014 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:27.014 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:27.272 [2024-11-12 10:39:15.881664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:27.272 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.272 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.531 nvme0n1 00:17:27.531 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:27.531 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.789 Running I/O for 2 seconds... 
00:17:29.661 18924.00 IOPS, 73.92 MiB/s [2024-11-12T10:39:18.419Z] 19114.00 IOPS, 74.66 MiB/s 00:17:29.661 Latency(us) 00:17:29.661 [2024-11-12T10:39:18.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.661 nvme0n1 : 2.01 19129.40 74.72 0.00 0.00 6685.63 4200.26 15371.17 00:17:29.661 [2024-11-12T10:39:18.419Z] =================================================================================================================== 00:17:29.661 [2024-11-12T10:39:18.419Z] Total : 19129.40 74.72 0.00 0.00 6685.63 4200.26 15371.17 00:17:29.661 { 00:17:29.661 "results": [ 00:17:29.661 { 00:17:29.661 "job": "nvme0n1", 00:17:29.661 "core_mask": "0x2", 00:17:29.661 "workload": "randwrite", 00:17:29.661 "status": "finished", 00:17:29.661 "queue_depth": 128, 00:17:29.661 "io_size": 4096, 00:17:29.661 "runtime": 2.005081, 00:17:29.661 "iops": 19129.401754841823, 00:17:29.661 "mibps": 74.72422560485087, 00:17:29.661 "io_failed": 0, 00:17:29.661 "io_timeout": 0, 00:17:29.661 "avg_latency_us": 6685.627354070479, 00:17:29.661 "min_latency_us": 4200.261818181818, 00:17:29.661 "max_latency_us": 15371.17090909091 00:17:29.661 } 00:17:29.661 ], 00:17:29.661 "core_count": 1 00:17:29.661 } 00:17:29.661 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:29.661 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:29.661 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:29.661 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:29.661 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:29.661 | select(.opcode=="crc32c") 00:17:29.661 | "\(.module_name) \(.executed)"' 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79631 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79631 ']' 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79631 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79631 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
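The latency column is consistent with the fixed queue depth, which is another useful check when reading these results: by Little's law the average number of in-flight I/Os is IOPS times mean latency, and 19129.40 IOPS times 6685.63 us works out to roughly 128, i.e. the configured -q 128 for this randwrite pass. For example:

  awk 'BEGIN { printf "%.1f in-flight\n", 19129.401754841823 * 6685.627354070479 / 1e6 }'   # ~127.9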
00:17:29.921 killing process with pid 79631 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79631' 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79631 00:17:29.921 Received shutdown signal, test time was about 2.000000 seconds 00:17:29.921 00:17:29.921 Latency(us) 00:17:29.921 [2024-11-12T10:39:18.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.921 [2024-11-12T10:39:18.679Z] =================================================================================================================== 00:17:29.921 [2024-11-12T10:39:18.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.921 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79631 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79679 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79679 /var/tmp/bperf.sock 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79679 ']' 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:30.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:30.180 10:39:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:30.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:30.180 Zero copy mechanism will not be used. 00:17:30.180 [2024-11-12 10:39:18.824799] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:17:30.180 [2024-11-12 10:39:18.824906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79679 ] 00:17:30.439 [2024-11-12 10:39:18.962274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.439 [2024-11-12 10:39:18.990259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.380 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:31.380 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:31.380 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:31.380 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:31.380 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:31.380 [2024-11-12 10:39:20.069018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:31.380 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.380 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.947 nvme0n1 00:17:31.947 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:31.947 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:31.947 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:31.947 Zero copy mechanism will not be used. 00:17:31.947 Running I/O for 2 seconds... 
00:17:34.260 7029.00 IOPS, 878.62 MiB/s [2024-11-12T10:39:23.018Z] 7040.00 IOPS, 880.00 MiB/s 00:17:34.260 Latency(us) 00:17:34.260 [2024-11-12T10:39:23.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:34.260 nvme0n1 : 2.00 7035.85 879.48 0.00 0.00 2269.09 1832.03 10545.34 00:17:34.260 [2024-11-12T10:39:23.018Z] =================================================================================================================== 00:17:34.260 [2024-11-12T10:39:23.018Z] Total : 7035.85 879.48 0.00 0.00 2269.09 1832.03 10545.34 00:17:34.260 { 00:17:34.260 "results": [ 00:17:34.260 { 00:17:34.260 "job": "nvme0n1", 00:17:34.260 "core_mask": "0x2", 00:17:34.260 "workload": "randwrite", 00:17:34.260 "status": "finished", 00:17:34.260 "queue_depth": 16, 00:17:34.260 "io_size": 131072, 00:17:34.260 "runtime": 2.003455, 00:17:34.260 "iops": 7035.845576766136, 00:17:34.260 "mibps": 879.480697095767, 00:17:34.260 "io_failed": 0, 00:17:34.260 "io_timeout": 0, 00:17:34.260 "avg_latency_us": 2269.0893241151584, 00:17:34.260 "min_latency_us": 1832.0290909090909, 00:17:34.260 "max_latency_us": 10545.338181818182 00:17:34.260 } 00:17:34.260 ], 00:17:34.260 "core_count": 1 00:17:34.260 } 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:34.260 | select(.opcode=="crc32c") 00:17:34.260 | "\(.module_name) \(.executed)"' 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:34.260 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79679 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79679 ']' 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79679 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79679 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
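That completes the four clean-digest passes. They are identical apart from the bdevperf workload arguments; the script drives each one through run_bperf with the attach and perform_tests steps shown earlier, so the only varying part is the invocation itself (the last one, from this pass, reproduced here):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
  # matrix covered above: randread and randwrite, each at 4096 bytes / qd 128 and 131072 bytes / qd 16,
  # all with DSA scanning disabled (scan_dsa=false)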
00:17:34.261 killing process with pid 79679 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79679' 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79679 00:17:34.261 Received shutdown signal, test time was about 2.000000 seconds 00:17:34.261 00:17:34.261 Latency(us) 00:17:34.261 [2024-11-12T10:39:23.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.261 [2024-11-12T10:39:23.019Z] =================================================================================================================== 00:17:34.261 [2024-11-12T10:39:23.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.261 10:39:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79679 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79499 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79499 ']' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79499 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79499 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:34.520 killing process with pid 79499 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79499' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79499 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79499 00:17:34.520 00:17:34.520 real 0m16.003s 00:17:34.520 user 0m31.749s 00:17:34.520 sys 0m4.188s 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 ************************************ 00:17:34.520 END TEST nvmf_digest_clean 00:17:34.520 ************************************ 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 ************************************ 00:17:34.520 START TEST nvmf_digest_error 00:17:34.520 ************************************ 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:17:34.520 10:39:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79762 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79762 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79762 ']' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.520 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:34.778 [2024-11-12 10:39:23.302784] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:34.778 [2024-11-12 10:39:23.302886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.778 [2024-11-12 10:39:23.443242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.778 [2024-11-12 10:39:23.470482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.778 [2024-11-12 10:39:23.470549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.778 [2024-11-12 10:39:23.470575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.778 [2024-11-12 10:39:23.470598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.778 [2024-11-12 10:39:23.470605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
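With the clean runs done, the harness moves on to nvmf_digest_error: a fresh target (pid 79762 above) is brought up with --wait-for-rpc specifically so that, before the framework starts, the crc32c opcode can be routed to the error-injection accel module; corrupt CRC32C results are then injected once I/O is flowing, so digest validation on the reads fails and produces the stream of "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions further down. The RPCs doing the work, exactly as they appear below (the harness issues the accel calls through its rpc_cmd wrapper; shown here as plain rpc.py invocations):

  # before framework_start_init: send all crc32c work to the error-injection module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bperf side: keep NVMe error statistics and retry indefinitely so the run rides out injected failures
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # injection is held off while the controller is attached, then switched to corruption for the measured run
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256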
00:17:34.778 [2024-11-12 10:39:23.470917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 [2024-11-12 10:39:24.323460] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 [2024-11-12 10:39:24.359030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.714 null0 00:17:35.714 [2024-11-12 10:39:24.392000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.714 [2024-11-12 10:39:24.416063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79800 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79800 /var/tmp/bperf.sock 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:35.714 10:39:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79800 ']' 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:35.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:35.714 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.973 [2024-11-12 10:39:24.482457] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:35.973 [2024-11-12 10:39:24.482570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79800 ] 00:17:35.973 [2024-11-12 10:39:24.623634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.973 [2024-11-12 10:39:24.652543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.973 [2024-11-12 10:39:24.679671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.973 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.973 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:35.974 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:35.974 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.540 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.799 nvme0n1 00:17:36.799 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:36.799 10:39:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.799 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.799 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.799 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:36.799 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:36.799 Running I/O for 2 seconds... 00:17:36.799 [2024-11-12 10:39:25.485564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:36.799 [2024-11-12 10:39:25.485627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.799 [2024-11-12 10:39:25.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.799 [2024-11-12 10:39:25.501464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:36.799 [2024-11-12 10:39:25.501502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.799 [2024-11-12 10:39:25.501530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.799 [2024-11-12 10:39:25.517088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:36.799 [2024-11-12 10:39:25.517123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.799 [2024-11-12 10:39:25.517152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.799 [2024-11-12 10:39:25.533251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:36.799 [2024-11-12 10:39:25.533285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.800 [2024-11-12 10:39:25.533313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.800 [2024-11-12 10:39:25.549150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:36.800 [2024-11-12 10:39:25.549210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.800 [2024-11-12 10:39:25.549223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.565822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.565855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8618 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.565884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.580963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.580997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.581025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.596077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.596271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.596287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.611693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.611866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.611882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.626687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.626872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.626889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.641835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.642020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.642036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.656901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.656935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.656963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.672118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.672312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:7107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.672329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.687535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.687715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.687731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.702627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.702812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.702829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.717778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.717960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.717977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.732850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.732885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.732913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.747865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.748046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.748063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.762738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.762921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.762939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.778097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.778132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.778160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.793076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.793108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.793137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.059 [2024-11-12 10:39:25.807975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.059 [2024-11-12 10:39:25.808158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.059 [2024-11-12 10:39:25.808174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.824278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 [2024-11-12 10:39:25.824317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.824328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.839152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 [2024-11-12 10:39:25.839201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.839217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.854095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 [2024-11-12 10:39:25.854128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.854155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.869096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 [2024-11-12 10:39:25.869130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.869159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.884542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 
[2024-11-12 10:39:25.884575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.884603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.899519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.318 [2024-11-12 10:39:25.899700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.318 [2024-11-12 10:39:25.899716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.318 [2024-11-12 10:39:25.914545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.914758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.914775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:25.929706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.929887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.929903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:25.944686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.944720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.944748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:25.959398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.959611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.959627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:25.974283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.974464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.974482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:25.989305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:25.989485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:25.989502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:26.004366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:26.004400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:26.004427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:26.019035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:26.019070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:26.019098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:26.033835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:26.033870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:26.033898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:26.048849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:26.048882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:26.048910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.319 [2024-11-12 10:39:26.063805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.319 [2024-11-12 10:39:26.063988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.319 [2024-11-12 10:39:26.064004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.578 [2024-11-12 10:39:26.080117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.578 [2024-11-12 10:39:26.080152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.578 [2024-11-12 10:39:26.080180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.578 [2024-11-12 10:39:26.095091] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.578 [2024-11-12 10:39:26.095150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.578 [2024-11-12 10:39:26.095163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.110109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.110143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.110172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.125099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.125133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.125162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.140098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.140274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.140306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.156187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.156367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.156399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.171209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.171244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.171256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.186010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.186043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.186072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:37.579 [2024-11-12 10:39:26.200886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.200919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.200946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.215748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.215913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.215945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.230810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.230989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.231020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.246074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.246269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.246286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.261104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.261138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.261166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.276125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.276300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.276331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.291352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.291552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.291585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.306916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.307084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.307140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.579 [2024-11-12 10:39:26.322072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.579 [2024-11-12 10:39:26.322284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.579 [2024-11-12 10:39:26.322401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.838 [2024-11-12 10:39:26.339540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.838 [2024-11-12 10:39:26.339788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.838 [2024-11-12 10:39:26.340040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.838 [2024-11-12 10:39:26.357514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.838 [2024-11-12 10:39:26.357762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.357909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.373647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.373847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.373992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.388983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.389208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.389442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.404770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.404969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.405115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.420978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.421208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.421337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.440287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.440487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.440699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 16320.00 IOPS, 63.75 MiB/s [2024-11-12T10:39:26.597Z] [2024-11-12 10:39:26.466232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.466275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.466303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.482430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.482465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.482493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.498096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.498131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.498159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.513906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.513940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.513967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.529446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.529482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.529494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.545504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.545541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.545553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.562720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.562757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.562786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.839 [2024-11-12 10:39:26.580043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:37.839 [2024-11-12 10:39:26.580227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.839 [2024-11-12 10:39:26.580260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.098 [2024-11-12 10:39:26.596806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.098 [2024-11-12 10:39:26.596841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.098 [2024-11-12 10:39:26.596870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.098 [2024-11-12 10:39:26.613453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.098 [2024-11-12 10:39:26.613488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.098 [2024-11-12 10:39:26.613516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.098 [2024-11-12 10:39:26.628892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.098 [2024-11-12 10:39:26.628927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.098 [2024-11-12 10:39:26.628955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.098 [2024-11-12 10:39:26.644686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.098 [2024-11-12 10:39:26.644721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.098 [2024-11-12 10:39:26.644749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.098 [2024-11-12 10:39:26.660082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.098 [2024-11-12 10:39:26.660277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.660309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.675656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.675836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.675868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.691588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.691790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.691985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.708128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.708335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.708481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.724769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.725012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.725155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.741155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.741374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.741629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.757287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 
00:17:38.099 [2024-11-12 10:39:26.757485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.757659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.773473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.773676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.773797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.788842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.789040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.789242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.804624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.804824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.805002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.820178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.820241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.820271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.836186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.836401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.836419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.099 [2024-11-12 10:39:26.854310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.099 [2024-11-12 10:39:26.854347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.099 [2024-11-12 10:39:26.854360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.870843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.870882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.888717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.888903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.888936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.906482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.906518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.923310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.923504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.923537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.939219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.939254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.939266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.954052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.954255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.954272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.969134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.969167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.969207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.984082] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.984116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.984144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:26.999055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:26.999088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:26.999139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.014088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.014123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.014151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.028994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.029027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.029055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.043940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.044120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.044152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.058990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.059219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.074011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.074233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.074356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:38.359 [2024-11-12 10:39:27.089527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.089775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.090024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.359 [2024-11-12 10:39:27.105249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.359 [2024-11-12 10:39:27.105438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.359 [2024-11-12 10:39:27.105560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.618 [2024-11-12 10:39:27.121736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.618 [2024-11-12 10:39:27.121939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.618 [2024-11-12 10:39:27.122090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.618 [2024-11-12 10:39:27.137140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.618 [2024-11-12 10:39:27.137377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.618 [2024-11-12 10:39:27.137499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.152445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.152645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.152780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.167961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.168159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.168319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.183792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.183966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.183999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.198989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.199206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.199226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.214052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.214280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.214410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.229323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.229521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.229685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.244670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.244870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.245012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.260099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.260313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.260450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.275585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.275927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.290788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.290988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.291155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.306486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.306691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.306834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.321934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.322150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.322298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.337438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.337640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.337784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.353030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.353068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.619 [2024-11-12 10:39:27.370380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.619 [2024-11-12 10:39:27.370418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.619 [2024-11-12 10:39:27.370447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 [2024-11-12 10:39:27.387851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.878 [2024-11-12 10:39:27.388031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.878 [2024-11-12 10:39:27.388064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 [2024-11-12 10:39:27.405453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.878 [2024-11-12 10:39:27.405626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.878 [2024-11-12 10:39:27.405644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 [2024-11-12 10:39:27.423998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.878 [2024-11-12 10:39:27.424209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.878 [2024-11-12 10:39:27.424229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 [2024-11-12 10:39:27.442813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.878 [2024-11-12 10:39:27.442852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.878 [2024-11-12 10:39:27.442866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 [2024-11-12 10:39:27.459633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x702370) 00:17:38.878 [2024-11-12 10:39:27.459669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.878 [2024-11-12 10:39:27.459699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.878 16066.00 IOPS, 62.76 MiB/s 00:17:38.878 Latency(us) 00:17:38.878 [2024-11-12T10:39:27.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.878 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:38.878 nvme0n1 : 2.01 16044.20 62.67 0.00 0.00 7972.58 7089.80 32410.53 00:17:38.878 [2024-11-12T10:39:27.636Z] =================================================================================================================== 00:17:38.878 [2024-11-12T10:39:27.636Z] Total : 16044.20 62.67 0.00 0.00 7972.58 7089.80 32410.53 00:17:38.878 { 00:17:38.878 "results": [ 00:17:38.878 { 00:17:38.878 "job": "nvme0n1", 00:17:38.878 "core_mask": "0x2", 00:17:38.878 "workload": "randread", 00:17:38.878 "status": "finished", 00:17:38.878 "queue_depth": 128, 00:17:38.878 "io_size": 4096, 00:17:38.878 "runtime": 2.010696, 00:17:38.878 "iops": 16044.195641708146, 00:17:38.878 "mibps": 62.672639225422444, 00:17:38.878 "io_failed": 0, 00:17:38.878 "io_timeout": 0, 00:17:38.878 "avg_latency_us": 7972.5829381728, 00:17:38.878 "min_latency_us": 7089.8036363636365, 00:17:38.878 "max_latency_us": 32410.53090909091 00:17:38.878 } 00:17:38.878 ], 00:17:38.878 "core_count": 1 00:17:38.878 } 00:17:38.878 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:38.878 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:38.878 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:38.878 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:38.878 | .driver_specific 00:17:38.878 | .nvme_error 00:17:38.878 | 
.status_code 00:17:38.878 | .command_transient_transport_error' 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 )) 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79800 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79800 ']' 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79800 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79800 00:17:39.136 killing process with pid 79800 00:17:39.136 Received shutdown signal, test time was about 2.000000 seconds 00:17:39.136 00:17:39.136 Latency(us) 00:17:39.136 [2024-11-12T10:39:27.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.136 [2024-11-12T10:39:27.894Z] =================================================================================================================== 00:17:39.136 [2024-11-12T10:39:27.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79800' 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79800 00:17:39.136 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79800 00:17:39.395 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:39.395 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79847 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79847 /var/tmp/bperf.sock 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79847 ']' 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:39.396 10:39:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:39.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:39.396 10:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.396 [2024-11-12 10:39:28.007552] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:39.396 [2024-11-12 10:39:28.007812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79847 ] 00:17:39.396 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:39.396 Zero copy mechanism will not be used. 00:17:39.396 [2024-11-12 10:39:28.143093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.655 [2024-11-12 10:39:28.173589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.655 [2024-11-12 10:39:28.201081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.655 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.655 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:39.655 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:39.655 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.914 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.172 nvme0n1 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:40.172 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:40.432 Zero copy mechanism will not be used. 00:17:40.432 Running I/O for 2 seconds... 00:17:40.432 [2024-11-12 10:39:28.956520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.956602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.961337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.961374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.961387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.965710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.965750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.965778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.970100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.970163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.974420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.974456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.974484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.978561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.978596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.978623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.982770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.982807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.982835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.987090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.987152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.987167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.991481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.991537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.991564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.995650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.995686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.995713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:28.999808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:28.999859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:28.999887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.004097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.004133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.004160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.008353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.008388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.008415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.012507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.012543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.016725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.016760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.016787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.021090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.021125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.021152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.025212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.025251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.025278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.029306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.029341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.029367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.033329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.033362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.033388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.037671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.432 [2024-11-12 10:39:29.037706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:40.432 [2024-11-12 10:39:29.037732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.432 [2024-11-12 10:39:29.041650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.041684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.041711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.045767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.045818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.045830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.049709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.049742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.049769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.054068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.054104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.054131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.058027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.058062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.058089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.061971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.062005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.062032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.065927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.065963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.065989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.069969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.070005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.070017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.074463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.074498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.074525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.078935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.078986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.079027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.082978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.083014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.083026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.086911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.086961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.086987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.091286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.091323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.091336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.095237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.095273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.095301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.099304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.099341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.099353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.103338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.103375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.103402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.107616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.107650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.107677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.111550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.111584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.111611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.115486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.115535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.115561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.119446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.119495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.119536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.123618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:40.433 [2024-11-12 10:39:29.123656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.123684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.127805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.127867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.131828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.131863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.131889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.135902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.135936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.135963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.140195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.140267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.140294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.144307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.144357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.144384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.148427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.148477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.148504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.152593] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.152660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.152689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.156691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.156742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.156769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.160741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.160793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.160820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.164820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.164870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.164897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.168842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.168893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.168920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.172898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.172947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.172974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.176950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.177001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.177028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.181052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.181103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.181131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.433 [2024-11-12 10:39:29.185486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.433 [2024-11-12 10:39:29.185552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.433 [2024-11-12 10:39:29.185564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.694 [2024-11-12 10:39:29.189808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.694 [2024-11-12 10:39:29.189859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.694 [2024-11-12 10:39:29.189887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.694 [2024-11-12 10:39:29.194224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.694 [2024-11-12 10:39:29.194282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.694 [2024-11-12 10:39:29.194310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.694 [2024-11-12 10:39:29.198213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.694 [2024-11-12 10:39:29.198264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.198275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.202102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.202153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.202180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.206084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.206135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.206162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.210081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.210132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.210159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.214108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.214159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.214186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.218044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.218095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.218121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.222049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.222101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.222127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.226067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.226118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.226145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.229993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.230043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.230070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.234022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.234073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.234100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.238075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.238126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.238153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.242073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.242123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.242150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.246142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.246216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.246229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.250069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.250120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.250147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.254037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.254089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.258022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.258073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.258100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.261952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.262003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:40.695 [2024-11-12 10:39:29.262029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.265941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.265992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.266019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.269906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.269956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.269983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.273904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.273955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.277909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.277959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.277986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.281887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.281937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.281964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.285897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.285975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.289968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.290018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.290046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.293942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.293992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.294019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.297939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.298017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.302005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.695 [2024-11-12 10:39:29.302056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-12 10:39:29.302084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-12 10:39:29.305952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.306003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.306045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.310010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.310061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.310087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.314126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.314202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.314216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.318083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.318134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.318161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.322117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.322168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.322206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.326059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.326110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.326137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.329978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.330029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.330055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.333950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.334001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.334028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.337982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.338032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.338059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.341963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.342014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.342041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.346008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:40.696 [2024-11-12 10:39:29.346058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.349950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.350001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.350028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.353914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.353963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.353990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.357938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.357989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.358016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.361923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.361974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.362001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.365924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.365975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.366001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.369927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.369978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.370006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.373934] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.373983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.374021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.377896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.377947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.377974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.381858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.381909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.381936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.385834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.385911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.389848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.389899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.389926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.393829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.393879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.393906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.397902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.397951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.397978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.402134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.402211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.402224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.406526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.406608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.406635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.410793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.696 [2024-11-12 10:39:29.410844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-12 10:39:29.410856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-12 10:39:29.415300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.415338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.415361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.419771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.419840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.419852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.424469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.424523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.424536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.428754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.428804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.428830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.433096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.433147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.433175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.437357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.437394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.437423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.441686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.441766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.697 [2024-11-12 10:39:29.446310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.697 [2024-11-12 10:39:29.446357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-12 10:39:29.446386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-12 10:39:29.451059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.958 [2024-11-12 10:39:29.451119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-12 10:39:29.451150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-12 10:39:29.455352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.958 [2024-11-12 10:39:29.455391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-12 10:39:29.455404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.459527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.459576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.459613] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.463353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.463391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.463404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.467290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.467333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.467347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.471042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.471093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.475001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.475078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.478919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.478969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.478995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.482851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.482901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.482928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.486862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.486913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.486940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.490904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.490955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.490982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.494892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.494943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.494970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.499173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.499220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.499233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.503564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.503611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.503639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.507984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.508034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.508061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.512571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.512626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.512639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.517437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.517473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.522024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.522075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.522102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.526516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.526583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.526611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.530817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.530895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.535249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.535285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.535312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.539251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.539287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.539300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.543206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.543239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.543266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.547023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.547073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.547100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.550986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.551035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.551062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.554997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.555046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.555073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.558993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.559028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.559040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.563048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.563098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.563164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.566929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.566978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.567005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.570849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.570898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.570924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.574864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.574914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.574925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.578864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.578913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.578939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.582814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.582864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.582891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.586765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.586815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.586841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.590738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.590787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.594643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.594692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.594719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.598816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.598867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.598879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.603165] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.603220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.603233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.607214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.607249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.607276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.611149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.611198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.611213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.615014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.615064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.615090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.619001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.619051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.619077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-12 10:39:29.622939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.959 [2024-11-12 10:39:29.622989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-12 10:39:29.623015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.626908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.626958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.626985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.630851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.630901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.630927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.634779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.634828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.638676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.638725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.638752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.642589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.642637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.642664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.646600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.646649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.646676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.650505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.650554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.650580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.654392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.654441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.654467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.658298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.658347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.658373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.662164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.662224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.662252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.666088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.666138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.666164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.669939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.669990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.670017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.673824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.673858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.673884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.677743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.677777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.677803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.681678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.681738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.685491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.685526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.685552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.689452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.689486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.689513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.693332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.693365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.693392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.697153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.697214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.697242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.701108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.701158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.701185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.705129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.705204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.705217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.709209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.709255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.709267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.960 [2024-11-12 10:39:29.713568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:40.960 [2024-11-12 10:39:29.713605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.960 [2024-11-12 10:39:29.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.717630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.717664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.717691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.721958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.722007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.722033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.725847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.725881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.725907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.729830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.729890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.733840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.733875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.733901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.737854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.737889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.737916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.741820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.741855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.741881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.745838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.745872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.745899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.749818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.749852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.749879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.753980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.754030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.754058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.758006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.758056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.758082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.762034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.762084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.762110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.766038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.766087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.766113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.770146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.770223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.770238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.774140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.774215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.774244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.778243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.778291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.778318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.782118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.782167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.782205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.786044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.786093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.786118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.790021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.790071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.790097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.793983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:41.223 [2024-11-12 10:39:29.794032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.794059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.797987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.798036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.798063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.801986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.802035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.802061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.805970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.806019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.806046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.809868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.223 [2024-11-12 10:39:29.809902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.223 [2024-11-12 10:39:29.809928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.223 [2024-11-12 10:39:29.813830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.813865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.813892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.817769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.817804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.817832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.821629] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.821663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.821689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.825590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.825624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.825651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.829476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.829510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.829537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.833436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.833471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.833498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.837295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.837328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.837355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.841123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.841173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.841209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.845102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.845153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.845180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.849057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.849106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.849133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.853050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.853099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.853125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.857072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.857123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.857149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.861114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.861164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.861201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.865039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.865089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.865116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.868987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.869037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.869063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.872999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.873049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.873076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.876933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.876983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.877010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.880780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.880829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.880855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.884776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.884826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.884853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.888778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.888827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.888854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.892684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.892733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.892760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.896562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.896610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.896637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.900483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.900533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.900559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.904452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.904526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.908417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.908467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.908493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.912327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.912376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.912403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.916293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.916341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.916367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.920240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.920288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.920315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.924159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.924219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.924246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.928078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.928128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:41.224 [2024-11-12 10:39:29.928155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.932003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.932052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.932079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.935920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.935968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.935994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.939929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.939977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.940004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.943844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.943893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.943920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.947762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.947811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.947837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.224 7611.00 IOPS, 951.38 MiB/s [2024-11-12T10:39:29.982Z] [2024-11-12 10:39:29.952719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.952768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.952795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.956694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.956744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.956771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.960711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.224 [2024-11-12 10:39:29.960762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.224 [2024-11-12 10:39:29.960789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.224 [2024-11-12 10:39:29.964653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.225 [2024-11-12 10:39:29.964703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.225 [2024-11-12 10:39:29.964729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.225 [2024-11-12 10:39:29.968637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.225 [2024-11-12 10:39:29.968686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.225 [2024-11-12 10:39:29.968712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.225 [2024-11-12 10:39:29.972513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.225 [2024-11-12 10:39:29.972561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.225 [2024-11-12 10:39:29.972587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.225 [2024-11-12 10:39:29.976763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.225 [2024-11-12 10:39:29.976814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.225 [2024-11-12 10:39:29.976840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:29.981041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:29.981092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:29.981103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:29.985321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:41.485 [2024-11-12 10:39:29.985370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:29.985397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:29.989204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:29.989261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:29.989288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:29.993064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:29.993113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:29.993139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:29.996983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:29.997032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:29.997059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.001049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.001099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.001126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.005606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.005647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.005675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.010047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.010101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.010130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.014586] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.014637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.014664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.019048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.019127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.019159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.023635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.023675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.023688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.028288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.028327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.028341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.032965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.033017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.033044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.037422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.037474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.037487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.041756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.041791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.041819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.046018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.046069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.046097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.050362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.050413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.050426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.054770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.054819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.054846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.058794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.058844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.058871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.062937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.062986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.063013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.067062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.485 [2024-11-12 10:39:30.067135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.485 [2024-11-12 10:39:30.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.485 [2024-11-12 10:39:30.071447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.071528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.071568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.075566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.075615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.075641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.079621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.079671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.079698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.083698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.083747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.083774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.087752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.087801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.087829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.092013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.092062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.092089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.096109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.096159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.096186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.100185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.100245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.100272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.104539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.104604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.104631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.108709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.108758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.108785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.112761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.112810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.112836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.116956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.117006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.117033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.121227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.121275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.121302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.125232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.125280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.125307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.129195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.129244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.129271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.133161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.133221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.133248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.137328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.137376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.137403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.141366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.141415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.141443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.145688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.145725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.145753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.149879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.149930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.149958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.154446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.154497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.154524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.159025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.159092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.159145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.163600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.163652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.163681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.168317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.168353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.168366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.172855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.172907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.172949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.177148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.177225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.177240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.181392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.486 [2024-11-12 10:39:30.181442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.486 [2024-11-12 10:39:30.181470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.486 [2024-11-12 10:39:30.185828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.185880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.185908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.190104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.190156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.190183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.194269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.194319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.194347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.198857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.198908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.198935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.203139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.203209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.203224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.207167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.207215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.207243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.211412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.211475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.211502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.215484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.215543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.215570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.219423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:41.487 [2024-11-12 10:39:30.219481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.219508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.223448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.223486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.223499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.227399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.227436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.227449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.231827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.231878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.231890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.487 [2024-11-12 10:39:30.236335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.487 [2024-11-12 10:39:30.236386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.487 [2024-11-12 10:39:30.236414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.241209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.241286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.241299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.245402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.245454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.245466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.249616] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.249666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.249694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.253737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.253787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.253813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.257810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.257861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.257888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.261812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.261862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.261889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.266067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.266118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.270035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.270086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.270113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.274201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.274249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.274276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.278408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.278458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.278485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.282432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.282482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.282509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.286417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.286466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.286493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.290587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.290658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.290671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.294614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.294665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.294691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.298623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.298673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.298700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.302828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.302879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.302906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.307277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.307316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.307329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.311285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.311321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.311349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.315342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.315377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.315405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.748 [2024-11-12 10:39:30.319384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.748 [2024-11-12 10:39:30.319419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.748 [2024-11-12 10:39:30.319462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.323229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.323263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.323275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.327071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.327160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.327173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.331082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.331161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.331210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.335040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.335090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.335142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.339074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.339149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.339163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.343035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.343085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.343135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.347078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.347152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.347166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.351019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.351070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.351096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.355045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.355095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.359022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.359073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.359099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.363011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.363060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.366875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.366924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.366950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.370810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.370858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.370885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.374817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.374866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.374892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.378702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.378750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.378777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.382624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.382673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.386492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.386540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.386567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.390460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.390551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.394286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.394334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.394361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.398307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.398342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.398369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.402270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.402318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.402344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.406235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.406284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.406311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.410129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.410206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.410219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.414121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.414171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.414225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.418071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.418121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.418148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.421996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.422045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.422072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.426062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.749 [2024-11-12 10:39:30.426093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.749 [2024-11-12 10:39:30.426132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.749 [2024-11-12 10:39:30.430392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.430443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.430471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.434512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.434562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.434589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.438886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.438938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.438966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.443575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:41.750 [2024-11-12 10:39:30.443610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.443622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.448231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.448295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.448309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.452673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.452721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.452748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.457037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.457086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.457113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.461277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.461325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.461338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.465263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.465312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.465338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.469286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.469335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.469347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.473122] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.473172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.477061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.477111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.477138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.481066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.481116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.481142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.485043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.485093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.485120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.489121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.489171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.489208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.493170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.493229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.493256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.497247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.497295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.497322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:41.750 [2024-11-12 10:39:30.501507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:41.750 [2024-11-12 10:39:30.501558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.750 [2024-11-12 10:39:30.501569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.505612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.505662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.505688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.509849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.509914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.509940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.513866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.513916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.513942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.517811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.517860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.517887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.522181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.522241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.526475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.526525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.526568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.530937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.531003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.531030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.011 [2024-11-12 10:39:30.535388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.011 [2024-11-12 10:39:30.535438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.011 [2024-11-12 10:39:30.535467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.540102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.540153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.540180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.544394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.544442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.544470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.548660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.548696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.548723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.553169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.553228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.553255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.557438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.557488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.557514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.561717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.561769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.561796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.565833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.565883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.565910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.569906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.569956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.569983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.573949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.573998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.574025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.577913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.577963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.577990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.582063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.582112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.582139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.586071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.586121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.586147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.590041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.590091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.590117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.594017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.594066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.594093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.597963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.598013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.598039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.601955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.602006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.605988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.606037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.606064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.609983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.610032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.610059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.614036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.614086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.614128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.618051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.618101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.618128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.621993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.622043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.622069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.625958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.626008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.626034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.629983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.630033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.630059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.012 [2024-11-12 10:39:30.633921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.012 [2024-11-12 10:39:30.633986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.012 [2024-11-12 10:39:30.634012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.638004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.638054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.638081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.642367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.642416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.642443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.646606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.646639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.646666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.650614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.650649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.650676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.654645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.654679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.654706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.658793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.658827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.658853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.662885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.662920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.662962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.666975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.667025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.667051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.670892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:42.013 [2024-11-12 10:39:30.670927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.670953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.674763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.674796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.674822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.678650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.678684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.678711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.682585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.682618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.682644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.686476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.686536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.686564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.690427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.690476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.690518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.694365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.694414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.694440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.698235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.698283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.698309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.702254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.702303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.702330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.706137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.706212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.706226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.710149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.710224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.714059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.714108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.714134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.718009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.718058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.718085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.721917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.721966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.721992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.725803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.725852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.725878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.729816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.729891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.733870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.733918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.733944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.737795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.737845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.737872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.013 [2024-11-12 10:39:30.741806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.013 [2024-11-12 10:39:30.741855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.013 [2024-11-12 10:39:30.741881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.014 [2024-11-12 10:39:30.745815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.014 [2024-11-12 10:39:30.745864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.014 [2024-11-12 10:39:30.745891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.014 [2024-11-12 10:39:30.749810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.014 [2024-11-12 10:39:30.749859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.014 [2024-11-12 10:39:30.749885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.014 [2024-11-12 10:39:30.753771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.014 [2024-11-12 10:39:30.753820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.014 [2024-11-12 10:39:30.753847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.014 [2024-11-12 10:39:30.757823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.014 [2024-11-12 10:39:30.757873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.014 [2024-11-12 10:39:30.757899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.014 [2024-11-12 10:39:30.761886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.014 [2024-11-12 10:39:30.761937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.014 [2024-11-12 10:39:30.761949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.274 [2024-11-12 10:39:30.766281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.274 [2024-11-12 10:39:30.766331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.274 [2024-11-12 10:39:30.766358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.274 [2024-11-12 10:39:30.770260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.274 [2024-11-12 10:39:30.770309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.274 [2024-11-12 10:39:30.770335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.274 [2024-11-12 10:39:30.774427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.274 [2024-11-12 10:39:30.774476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.774504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.778275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.778323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.778349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.782137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.782211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.782224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.786011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.786060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.786086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.790019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.790069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.794073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.794123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.794149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.798002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.798052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.798078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.801963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.802013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.802039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.805963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.806012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.806038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.809861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.809911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.809938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.813749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.813798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.813825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.817700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.817749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.817776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.821717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.821765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.821792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.825689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.825738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.825765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.829572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.829638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.829664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.833593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.833643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.833669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.837577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.837642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.837669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.841599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.841648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.841674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.845635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.845684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.845711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.849642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.849692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.849718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.853601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.853650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.853676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.857679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.857728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.857755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.861643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.861692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.861719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.865545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.865579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.865605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.869444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.869478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.869505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.873369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.873403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.873429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.877254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.275 [2024-11-12 10:39:30.877285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.275 [2024-11-12 10:39:30.877311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.275 [2024-11-12 10:39:30.881171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.881230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.881257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.885123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.885173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.885210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.889063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 
00:17:42.276 [2024-11-12 10:39:30.889112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.889139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.893207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.893241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.893267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.897173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.897233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.897260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.901158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.901218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.901244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.905274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.905304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.905331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.909249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.909281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.909308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.913215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.913246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.913273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.917218] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.917249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.917275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.921090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.921140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.921167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.924997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.925046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.925073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.928958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.929007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.929034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.933024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.933074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.933100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.936983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.937033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.937059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.941058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.941107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.941134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.945066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.945115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.945142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.276 [2024-11-12 10:39:30.948998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ac400) 00:17:42.276 [2024-11-12 10:39:30.949048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.276 [2024-11-12 10:39:30.949075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.276 7579.50 IOPS, 947.44 MiB/s 00:17:42.276 Latency(us) 00:17:42.276 [2024-11-12T10:39:31.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:42.276 nvme0n1 : 2.00 7577.25 947.16 0.00 0.00 2108.62 1705.43 5421.61 00:17:42.276 [2024-11-12T10:39:31.034Z] =================================================================================================================== 00:17:42.276 [2024-11-12T10:39:31.034Z] Total : 7577.25 947.16 0.00 0.00 2108.62 1705.43 5421.61 00:17:42.276 { 00:17:42.276 "results": [ 00:17:42.276 { 00:17:42.276 "job": "nvme0n1", 00:17:42.276 "core_mask": "0x2", 00:17:42.276 "workload": "randread", 00:17:42.276 "status": "finished", 00:17:42.276 "queue_depth": 16, 00:17:42.276 "io_size": 131072, 00:17:42.276 "runtime": 2.002706, 00:17:42.276 "iops": 7577.2479834783535, 00:17:42.276 "mibps": 947.1559979347942, 00:17:42.276 "io_failed": 0, 00:17:42.276 "io_timeout": 0, 00:17:42.276 "avg_latency_us": 2108.6200638610158, 00:17:42.276 "min_latency_us": 1705.4254545454546, 00:17:42.276 "max_latency_us": 5421.614545454546 00:17:42.276 } 00:17:42.276 ], 00:17:42.276 "core_count": 1 00:17:42.276 } 00:17:42.276 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:42.276 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:42.276 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:42.276 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:42.276 | .driver_specific 00:17:42.276 | .nvme_error 00:17:42.276 | .status_code 00:17:42.276 | .command_transient_transport_error' 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 489 > 0 )) 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79847 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79847 ']' 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79847 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@957 -- # uname 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79847 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:42.536 killing process with pid 79847 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79847' 00:17:42.536 Received shutdown signal, test time was about 2.000000 seconds 00:17:42.536 00:17:42.536 Latency(us) 00:17:42.536 [2024-11-12T10:39:31.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.536 [2024-11-12T10:39:31.294Z] =================================================================================================================== 00:17:42.536 [2024-11-12T10:39:31.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79847 00:17:42.536 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79847 00:17:42.795 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79894 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79894 /var/tmp/bperf.sock 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79894 ']' 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:42.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
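
The "(( 489 > 0 ))" check traced a little earlier is the pass/fail criterion for the randread digest-error run that has just finished: the harness reads nvme0n1's I/O statistics over bdevperf's RPC socket and requires at least one COMMAND TRANSIENT TRANSPORT ERROR to have been counted. Reassembled from the rpc.py and jq invocations shown in the trace (only the errcount variable name is added for illustration), the extraction is roughly:

# Query bdevperf (listening on /var/tmp/bperf.sock) for nvme0n1's iostat and
# pull the transient-transport-error counter out of the NVMe error statistics.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                  | .command_transient_transport_error')
# The digest-error test only passes if at least one such error was recorded.
(( errcount > 0 ))

With that count confirmed (489 here), the first bdevperf instance (pid 79847) is killed and a second one is started for the randwrite case whose setup follows.
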
00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:42.796 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.796 [2024-11-12 10:39:31.465216] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:42.796 [2024-11-12 10:39:31.465300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79894 ] 00:17:43.054 [2024-11-12 10:39:31.603320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.054 [2024-11-12 10:39:31.632805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.054 [2024-11-12 10:39:31.660120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.054 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:43.054 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:43.054 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.054 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.313 10:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.572 nvme0n1 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:43.572 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:43.831 Running I/O for 2 seconds... 
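
The setup traced just above for this second run boils down to the command sequence below. This is a condensed sketch, not new test logic: the binary paths, the bperf.sock address, and all RPC arguments are copied from the trace; rpc_cmd is kept as the harness helper because the trace does not show which socket it targets; backgrounding the bdevperf process is implied by the waitforlisten step.

# Start a fresh bdevperf on its own RPC socket; -z makes it wait to be driven
# over RPC (perform_tests) instead of running a workload immediately.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable per-status-code NVMe error statistics and unlimited bdev retries
# (options exactly as traced above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with crc32c error injection disabled, attach the target with data
# digest (--ddgst) enabled, then switch injection to corrupting crc32c
# results (arguments as shown in the trace).
rpc_cmd accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Finally kick off the timed workload; every "data digest error" printed in
# the two seconds that follow is the effect of that injected corruption.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
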
00:17:43.831 [2024-11-12 10:39:32.415368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fef90 00:17:43.831 [2024-11-12 10:39:32.417989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.418045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.430661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166feb58 00:17:43.831 [2024-11-12 10:39:32.433153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.433228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.445599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fe2e8 00:17:43.831 [2024-11-12 10:39:32.448061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.448109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.460499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fda78 00:17:43.831 [2024-11-12 10:39:32.462773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.462820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.475499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fd208 00:17:43.831 [2024-11-12 10:39:32.478024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.478072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.492050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc998 00:17:43.831 [2024-11-12 10:39:32.494592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.494639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.508514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc128 00:17:43.831 [2024-11-12 10:39:32.511046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.511096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.525101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fb8b8 00:17:43.831 [2024-11-12 10:39:32.527722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.527756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:43.831 [2024-11-12 10:39:32.541417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fb048 00:17:43.831 [2024-11-12 10:39:32.543871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.831 [2024-11-12 10:39:32.543920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:43.832 [2024-11-12 10:39:32.556721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fa7d8 00:17:43.832 [2024-11-12 10:39:32.559042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.832 [2024-11-12 10:39:32.559093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:43.832 [2024-11-12 10:39:32.572450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f9f68 00:17:43.832 [2024-11-12 10:39:32.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.832 [2024-11-12 10:39:32.575031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.589946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f96f8 00:17:44.091 [2024-11-12 10:39:32.592413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.592462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.605952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f8e88 00:17:44.091 [2024-11-12 10:39:32.608245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.608301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.620981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f8618 00:17:44.091 [2024-11-12 10:39:32.623260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.623294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.635849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f7da8 00:17:44.091 [2024-11-12 10:39:32.638221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.638276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.650918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f7538 00:17:44.091 [2024-11-12 10:39:32.653140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.653192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.665795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f6cc8 00:17:44.091 [2024-11-12 10:39:32.668035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.668083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.680184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f6458 00:17:44.091 [2024-11-12 10:39:32.682185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.682231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.694087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f5be8 00:17:44.091 [2024-11-12 10:39:32.696197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.696251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.708584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f5378 00:17:44.091 [2024-11-12 10:39:32.710581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.710627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.722626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f4b08 00:17:44.091 [2024-11-12 10:39:32.724757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.724804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.736902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f4298 00:17:44.091 [2024-11-12 10:39:32.738869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.738913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.750978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f3a28 00:17:44.091 [2024-11-12 10:39:32.753036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.753082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.765323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f31b8 00:17:44.091 [2024-11-12 10:39:32.767318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.767355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.779354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f2948 00:17:44.091 [2024-11-12 10:39:32.781261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.781307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:44.091 [2024-11-12 10:39:32.793502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f20d8 00:17:44.091 [2024-11-12 10:39:32.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.091 [2024-11-12 10:39:32.795563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:44.092 [2024-11-12 10:39:32.807785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f1868 00:17:44.092 [2024-11-12 10:39:32.809611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.092 [2024-11-12 10:39:32.809657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:44.092 [2024-11-12 10:39:32.821655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f0ff8 00:17:44.092 [2024-11-12 10:39:32.823592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.092 [2024-11-12 10:39:32.823622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:44.092 [2024-11-12 10:39:32.835643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f0788 00:17:44.092 [2024-11-12 10:39:32.837430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.092 [2024-11-12 10:39:32.837476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.850804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eff18 00:17:44.351 [2024-11-12 10:39:32.852800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.852847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.865139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ef6a8 00:17:44.351 [2024-11-12 10:39:32.866924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.879442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eee38 00:17:44.351 [2024-11-12 10:39:32.881301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.881348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.893609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ee5c8 00:17:44.351 [2024-11-12 10:39:32.895463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.895520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.907948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166edd58 00:17:44.351 [2024-11-12 10:39:32.909713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.909758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.922028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ed4e8 00:17:44.351 [2024-11-12 10:39:32.923950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.923995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.936333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ecc78 00:17:44.351 [2024-11-12 10:39:32.938048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.938096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.950452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ec408 00:17:44.351 [2024-11-12 10:39:32.952247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.952301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.964630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ebb98 00:17:44.351 [2024-11-12 10:39:32.966476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.966522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.979576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eb328 00:17:44.351 [2024-11-12 10:39:32.981278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.981324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:32.993722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eaab8 00:17:44.351 [2024-11-12 10:39:32.995490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:32.995547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:33.007928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ea248 00:17:44.351 [2024-11-12 10:39:33.009579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.351 [2024-11-12 10:39:33.009625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:44.351 [2024-11-12 10:39:33.022034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e99d8 00:17:44.352 [2024-11-12 10:39:33.023811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 
10:39:33.023840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:44.352 [2024-11-12 10:39:33.036307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e9168 00:17:44.352 [2024-11-12 10:39:33.037909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 10:39:33.037955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:44.352 [2024-11-12 10:39:33.050401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e88f8 00:17:44.352 [2024-11-12 10:39:33.052098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 10:39:33.052144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:44.352 [2024-11-12 10:39:33.064713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e8088 00:17:44.352 [2024-11-12 10:39:33.066270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 10:39:33.066316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:44.352 [2024-11-12 10:39:33.078776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e7818 00:17:44.352 [2024-11-12 10:39:33.080462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 10:39:33.080508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:44.352 [2024-11-12 10:39:33.092996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e6fa8 00:17:44.352 [2024-11-12 10:39:33.094574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.352 [2024-11-12 10:39:33.094620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.108038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e6738 00:17:44.611 [2024-11-12 10:39:33.109594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.109641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.122528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e5ec8 00:17:44.611 [2024-11-12 10:39:33.124154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:44.611 [2024-11-12 10:39:33.124225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.136757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e5658 00:17:44.611 [2024-11-12 10:39:33.138231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.138284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.150841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e4de8 00:17:44.611 [2024-11-12 10:39:33.152393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.152439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.164959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e4578 00:17:44.611 [2024-11-12 10:39:33.166467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.166514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.179077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e3d08 00:17:44.611 [2024-11-12 10:39:33.180584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.180630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.193261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e3498 00:17:44.611 [2024-11-12 10:39:33.194710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.194758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.207500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e2c28 00:17:44.611 [2024-11-12 10:39:33.208891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.208938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.221653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e23b8 00:17:44.611 [2024-11-12 10:39:33.223033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3008 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.223080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.235831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e1b48 00:17:44.611 [2024-11-12 10:39:33.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.237230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.249876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e12d8 00:17:44.611 [2024-11-12 10:39:33.251301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.251335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.264291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e0a68 00:17:44.611 [2024-11-12 10:39:33.265653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.278391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e01f8 00:17:44.611 [2024-11-12 10:39:33.279818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.279848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.292562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166df988 00:17:44.611 [2024-11-12 10:39:33.293897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.293944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.306903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166df118 00:17:44.611 [2024-11-12 10:39:33.308316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.308362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:44.611 [2024-11-12 10:39:33.321098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166de8a8 00:17:44.611 [2024-11-12 10:39:33.322382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:22416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.611 [2024-11-12 10:39:33.322428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:44.612 [2024-11-12 10:39:33.335150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166de038 00:17:44.612 [2024-11-12 10:39:33.336433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.612 [2024-11-12 10:39:33.336479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:44.612 [2024-11-12 10:39:33.355077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166de038 00:17:44.612 [2024-11-12 10:39:33.357424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.612 [2024-11-12 10:39:33.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.369972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166de8a8 00:17:44.871 [2024-11-12 10:39:33.372623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.372669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.384523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166df118 00:17:44.871 [2024-11-12 10:39:33.386790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.386836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:44.871 17206.00 IOPS, 67.21 MiB/s [2024-11-12T10:39:33.629Z] [2024-11-12 10:39:33.400334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166df988 00:17:44.871 [2024-11-12 10:39:33.402615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.402663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.414422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e01f8 00:17:44.871 [2024-11-12 10:39:33.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.416781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.428555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e0a68 00:17:44.871 [2024-11-12 
10:39:33.430798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.430844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.442776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e12d8 00:17:44.871 [2024-11-12 10:39:33.445035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.445081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.457026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e1b48 00:17:44.871 [2024-11-12 10:39:33.459298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.459331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.471069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e23b8 00:17:44.871 [2024-11-12 10:39:33.473274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.473321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.485288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e2c28 00:17:44.871 [2024-11-12 10:39:33.487563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.487594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.500431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e3498 00:17:44.871 [2024-11-12 10:39:33.502851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.502898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.516958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e3d08 00:17:44.871 [2024-11-12 10:39:33.519332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.519366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.532098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e4578 
00:17:44.871 [2024-11-12 10:39:33.534207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.534252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.546234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e4de8 00:17:44.871 [2024-11-12 10:39:33.548417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.548463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.560462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e5658 00:17:44.871 [2024-11-12 10:39:33.562491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.562537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.574378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e5ec8 00:17:44.871 [2024-11-12 10:39:33.576504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.576550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.589735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e6738 00:17:44.871 [2024-11-12 10:39:33.592150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.607634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e6fa8 00:17:44.871 [2024-11-12 10:39:33.610035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:44.871 [2024-11-12 10:39:33.610083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:44.871 [2024-11-12 10:39:33.625214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e7818 00:17:45.130 [2024-11-12 10:39:33.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.627618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:45.130 [2024-11-12 10:39:33.642236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with 
pdu=0x2000166e8088 00:17:45.130 [2024-11-12 10:39:33.644560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.644596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:45.130 [2024-11-12 10:39:33.659265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e88f8 00:17:45.130 [2024-11-12 10:39:33.661660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.661696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:45.130 [2024-11-12 10:39:33.676820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e9168 00:17:45.130 [2024-11-12 10:39:33.679034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.679070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:45.130 [2024-11-12 10:39:33.693756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166e99d8 00:17:45.130 [2024-11-12 10:39:33.696088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.696135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:45.130 [2024-11-12 10:39:33.710693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ea248 00:17:45.130 [2024-11-12 10:39:33.712862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.130 [2024-11-12 10:39:33.712900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.727524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eaab8 00:17:45.131 [2024-11-12 10:39:33.729632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.729667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.744605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eb328 00:17:45.131 [2024-11-12 10:39:33.746740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.746776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.761106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90850) with pdu=0x2000166ebb98 00:17:45.131 [2024-11-12 10:39:33.763169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.763213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.776801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ec408 00:17:45.131 [2024-11-12 10:39:33.778795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.778844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.792667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ecc78 00:17:45.131 [2024-11-12 10:39:33.794732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.794766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.809370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ed4e8 00:17:45.131 [2024-11-12 10:39:33.811384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.811421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.825146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166edd58 00:17:45.131 [2024-11-12 10:39:33.827187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.827232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.841434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166ee5c8 00:17:45.131 [2024-11-12 10:39:33.843407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.843487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.856285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eee38 00:17:45.131 [2024-11-12 10:39:33.858215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.858274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.870831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1b90850) with pdu=0x2000166ef6a8 00:17:45.131 [2024-11-12 10:39:33.872734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.131 [2024-11-12 10:39:33.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:45.131 [2024-11-12 10:39:33.885571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166eff18 00:17:45.390 [2024-11-12 10:39:33.887491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.887556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.900509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f0788 00:17:45.390 [2024-11-12 10:39:33.902245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.914835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f0ff8 00:17:45.390 [2024-11-12 10:39:33.916609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.928999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f1868 00:17:45.390 [2024-11-12 10:39:33.930746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.930791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.943175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f20d8 00:17:45.390 [2024-11-12 10:39:33.944835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.944865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.957397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f2948 00:17:45.390 [2024-11-12 10:39:33.959025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.959070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.971559] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f31b8 00:17:45.390 [2024-11-12 10:39:33.973223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.973277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.985674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f3a28 00:17:45.390 [2024-11-12 10:39:33.987300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:33.987333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:33.999962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f4298 00:17:45.390 [2024-11-12 10:39:34.001577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:34.001639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:34.014223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f4b08 00:17:45.390 [2024-11-12 10:39:34.015875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.390 [2024-11-12 10:39:34.015921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:45.390 [2024-11-12 10:39:34.028400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f5378 00:17:45.390 [2024-11-12 10:39:34.029983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.030030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.042531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f5be8 00:17:45.391 [2024-11-12 10:39:34.044138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.044207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.056831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f6458 00:17:45.391 [2024-11-12 10:39:34.058361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.058407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:45.391 
[2024-11-12 10:39:34.070959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f6cc8 00:17:45.391 [2024-11-12 10:39:34.072546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.072578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.085133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f7538 00:17:45.391 [2024-11-12 10:39:34.086639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.086685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.099271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f7da8 00:17:45.391 [2024-11-12 10:39:34.100780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.100811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.113578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f8618 00:17:45.391 [2024-11-12 10:39:34.115045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.115091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.127958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f8e88 00:17:45.391 [2024-11-12 10:39:34.129409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.129455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:45.391 [2024-11-12 10:39:34.142085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f96f8 00:17:45.391 [2024-11-12 10:39:34.143773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.391 [2024-11-12 10:39:34.143818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.157283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166f9f68 00:17:45.650 [2024-11-12 10:39:34.158724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.158771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.171506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fa7d8 00:17:45.650 [2024-11-12 10:39:34.172914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.172976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.185629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fb048 00:17:45.650 [2024-11-12 10:39:34.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.187081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.200348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fb8b8 00:17:45.650 [2024-11-12 10:39:34.201690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.201735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.214430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc128 00:17:45.650 [2024-11-12 10:39:34.215852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.215898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.228609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc998 00:17:45.650 [2024-11-12 10:39:34.229907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.229954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.242646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fd208 00:17:45.650 [2024-11-12 10:39:34.244049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.244079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.256839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fda78 00:17:45.650 [2024-11-12 10:39:34.258107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.258153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.271968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fe2e8 00:17:45.650 [2024-11-12 10:39:34.273279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.273325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.286041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166feb58 00:17:45.650 [2024-11-12 10:39:34.287397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.287431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.306295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fef90 00:17:45.650 [2024-11-12 10:39:34.308692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.308739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.320549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166feb58 00:17:45.650 [2024-11-12 10:39:34.322816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.322847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.334603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fe2e8 00:17:45.650 [2024-11-12 10:39:34.336875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.336921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.348721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fda78 00:17:45.650 [2024-11-12 10:39:34.350976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.351022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.362698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fd208 00:17:45.650 [2024-11-12 10:39:34.364980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.650 [2024-11-12 10:39:34.365026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:45.650 [2024-11-12 10:39:34.377045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc998 00:17:45.650 [2024-11-12 10:39:34.379433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.651 [2024-11-12 10:39:34.379495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:45.651 [2024-11-12 10:39:34.390397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90850) with pdu=0x2000166fc128 00:17:45.651 17079.00 IOPS, 66.71 MiB/s [2024-11-12T10:39:34.409Z] [2024-11-12 10:39:34.392009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.651 [2024-11-12 10:39:34.392057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:45.651 00:17:45.651 Latency(us) 00:17:45.651 [2024-11-12T10:39:34.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.651 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.651 nvme0n1 : 2.01 17093.67 66.77 0.00 0.00 7479.96 5481.19 30027.40 00:17:45.651 [2024-11-12T10:39:34.409Z] =================================================================================================================== 00:17:45.651 [2024-11-12T10:39:34.409Z] Total : 17093.67 66.77 0.00 0.00 7479.96 5481.19 30027.40 00:17:45.651 { 00:17:45.651 "results": [ 00:17:45.651 { 00:17:45.651 "job": "nvme0n1", 00:17:45.651 "core_mask": "0x2", 00:17:45.651 "workload": "randwrite", 00:17:45.651 "status": "finished", 00:17:45.651 "queue_depth": 128, 00:17:45.651 "io_size": 4096, 00:17:45.651 "runtime": 2.005772, 00:17:45.651 "iops": 17093.66767508969, 00:17:45.651 "mibps": 66.7721393558191, 00:17:45.651 "io_failed": 0, 00:17:45.651 "io_timeout": 0, 00:17:45.651 "avg_latency_us": 7479.963747938464, 00:17:45.651 "min_latency_us": 5481.192727272727, 00:17:45.651 "max_latency_us": 30027.403636363637 00:17:45.651 } 00:17:45.651 ], 00:17:45.651 "core_count": 1 00:17:45.651 } 00:17:45.910 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:45.910 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:45.910 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:45.910 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:45.910 | .driver_specific 00:17:45.910 | .nvme_error 00:17:45.910 | .status_code 00:17:45.910 | .command_transient_transport_error' 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 134 > 0 )) 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79894 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79894 ']' 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79894 
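The trace above shows digest.sh reading back the transient-error count for nvme0n1: bdev_get_iostat is issued over the bperf RPC socket and the nvme_error statistics are reduced to a single number with jq; the (( 134 > 0 )) check just below then confirms the injected digest errors were actually counted. A minimal standalone version of the same query, assuming the rpc.py path, socket, and bdev name from this run (the nvme_error block is only populated because bdev_nvme_set_options --nvme-error-stat was passed earlier in the trace):

    # Re-issue the transient-error query seen in the trace above.
    # Paths, socket, and bdev name are the ones used in this log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'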
00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79894 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:46.169 killing process with pid 79894 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79894' 00:17:46.169 Received shutdown signal, test time was about 2.000000 seconds 00:17:46.169 00:17:46.169 Latency(us) 00:17:46.169 [2024-11-12T10:39:34.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.169 [2024-11-12T10:39:34.927Z] =================================================================================================================== 00:17:46.169 [2024-11-12T10:39:34.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79894 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79894 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79946 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79946 /var/tmp/bperf.sock 00:17:46.169 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79946 ']' 00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
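The bdevperf command at the end of the trace above starts the second pass of run_bperf_err: 128 KiB randwrite at queue depth 16 for 2 seconds (-w randwrite -o 131072 -q 16 -t 2 -z). The trace that follows wires it up the same way as the previous run; a condensed sketch of that setup, using only the RPC calls visible in this log (socket path, target address, and subsystem NQN are the ones from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Collect per-controller NVMe error statistics and retry failed I/O indefinitely.
    $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the TCP target with data digest (--ddgst) enabled, so each data PDU
    # carries a CRC32C that is verified.
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption via the accel error framework (flags copied from the
    # trace; there the call is issued through rpc_cmd rather than the bperf socket).
    # This is what produces the "Data digest error" / transient transport error lines.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32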
00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.170 10:39:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.170 [2024-11-12 10:39:34.919084] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:17:46.170 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.170 Zero copy mechanism will not be used. 00:17:46.170 [2024-11-12 10:39:34.919246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79946 ] 00:17:46.429 [2024-11-12 10:39:35.062912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.429 [2024-11-12 10:39:35.091965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.429 [2024-11-12 10:39:35.119473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.688 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.688 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:46.688 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:46.688 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:46.946 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:46.946 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.946 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:46.946 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.946 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.947 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.206 nvme0n1 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:47.206 10:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:47.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:47.206 Zero copy mechanism will not be used. 00:17:47.206 Running I/O for 2 seconds... 00:17:47.206 [2024-11-12 10:39:35.880719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.881064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.881104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.885618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.885959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.886006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.890399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.890723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.890760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.895177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.895486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.895523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.899939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.900274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.900317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.904806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.905170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.905213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.909712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.910048] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.910091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.914571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.914921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.919377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.919716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.919754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.924191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.924540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.924583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.928979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.929335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.929367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.933870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.934200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.934241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.938725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.939060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.939093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.943586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 
00:17:47.206 [2024-11-12 10:39:35.943923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.943956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.948475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.948789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.948835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.953325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.206 [2024-11-12 10:39:35.953665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.206 [2024-11-12 10:39:35.953696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.206 [2024-11-12 10:39:35.958289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.207 [2024-11-12 10:39:35.958665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.207 [2024-11-12 10:39:35.958705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.963729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.964056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.964093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.968901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.969258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.969307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.973783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.974110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.974147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.978668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.979010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.979052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.983571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.983911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.983944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.988311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.988649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.988680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.993069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.993436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.993478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:35.997870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:35.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:35.998253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.003014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.003391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.003422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.008315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.008696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.008728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.013407] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.013781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.013819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.018645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.018986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.019023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.023901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.024258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.024309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.029127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.029513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.029551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.034401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.034787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.034825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.039559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.039898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.467 [2024-11-12 10:39:36.039933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.467 [2024-11-12 10:39:36.044939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.467 [2024-11-12 10:39:36.045294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.045341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:47.468 [2024-11-12 10:39:36.050344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.050698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.055661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.056049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.056088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.060990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.061348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.061381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.066168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.066553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.066591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.071252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.076401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.076740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.076778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.081495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.081844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.081883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.086792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.087145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.087192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.091679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.092023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.092062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.097021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.097387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.097422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.102276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.102622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.102670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.107486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.107860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.107901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.112714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.113032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.113070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.117737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.118088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.118128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.122658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.123012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.123051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.128011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.128400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.128438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.133050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.133411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.133446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.138125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.138497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.138536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.143180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.143517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.143555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.148246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.148610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.148648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.153288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.153621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.153654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.158338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.158684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.158725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.468 [2024-11-12 10:39:36.163412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.468 [2024-11-12 10:39:36.163810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.468 [2024-11-12 10:39:36.163849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.168948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.169323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.174059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.174422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.174454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.179245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.179600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.184406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.184775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.184813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.189448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.189798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 
[2024-11-12 10:39:36.189841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.194575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.194942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.194980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.199758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.200093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.200130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.204700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.205042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.205076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.209986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.210338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.210371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.214943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.215307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.215340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.469 [2024-11-12 10:39:36.220232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.469 [2024-11-12 10:39:36.220606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.469 [2024-11-12 10:39:36.220643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.225248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.225607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.225644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.230217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.230598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.235019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.235398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.235443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.239862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.240191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.240238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.244563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.244902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.244935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.249331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.249713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.249750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.254109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.254471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.258846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.259211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.259254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.263657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.263974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.264011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.268450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.268782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.268815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.273259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.273600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.273657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.278007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.278350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.278384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.282715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.755 [2024-11-12 10:39:36.283053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.755 [2024-11-12 10:39:36.283084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.755 [2024-11-12 10:39:36.287422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.287825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.287864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.292235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.292585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.292625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.297024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.297393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.297429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.301804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.302145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.302211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.306655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.307000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.307040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.311443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.311801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.311838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.316148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.316510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.316546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.320908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.321252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.321293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.325753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 
[2024-11-12 10:39:36.326086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.326132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.330485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.330816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.330860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.335322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.335671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.335712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.340097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.340438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.340481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.344856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.345212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.345258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.349648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.350021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.354464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.354833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.354871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.359253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.359642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.364018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.364364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.364396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.368870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.369228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.369268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.373625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.373963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.374009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.378289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.378628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.378674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.382936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.383304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.383338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.387674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.388000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.388037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.392421] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.392758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.392790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.397249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.397586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.397633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.402066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.402413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.402460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.406950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.407304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.407337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.411794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.412123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.756 [2024-11-12 10:39:36.412161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.756 [2024-11-12 10:39:36.416537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.756 [2024-11-12 10:39:36.416874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.416906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.421351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.421711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.421751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
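[editor's note] The repeated "data_crc32_calc_done: Data digest error" entries above come from the NVMe/TCP data digest (DDGST) check: the transport computes a CRC-32C over the received PDU payload and compares it with the digest carried in the PDU, and a mismatch is surfaced as the TRANSIENT TRANSPORT ERROR completions printed alongside each error. The sketch below is illustrative only, not SPDK code: the helper names (crc32c, verify_data_digest) are hypothetical, and it assumes the digest is a plain CRC-32C (Castagnoli) over the payload bytes.

/* Minimal, self-contained sketch of a CRC-32C data digest check.
 * Hypothetical names; not taken from SPDK's tcp.c. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the payload matches the digest carried in the PDU,
 * -1 otherwise (the "Data digest error" case logged above). */
static int verify_data_digest(const uint8_t *payload, size_t len,
			      uint32_t received_ddgst)
{
	return crc32c(payload, len) == received_ddgst ? 0 : -1;
}

int main(void)
{
	const uint8_t payload[] = "123456789";
	/* Known CRC-32C check value for "123456789" is 0xE3069283. */
	uint32_t good = crc32c(payload, sizeof(payload) - 1);

	printf("computed ddgst: 0x%08x\n", (unsigned int)good);
	printf("match:    %d\n", verify_data_digest(payload, sizeof(payload) - 1, good));
	printf("mismatch: %d\n", verify_data_digest(payload, sizeof(payload) - 1, good ^ 1u));
	return 0;
}

In the test run logged here the digest errors are injected deliberately, so every WRITE in this stretch completes with the transient transport error status rather than success.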
00:17:47.757 [2024-11-12 10:39:36.426059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.426412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.426444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.430783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.431148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.431192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.435544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.435864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.435908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.440287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.440658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.445002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.445372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.445415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.449859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.450197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.450239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.454592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.454929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.454973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.459333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.459713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.459748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.464043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.464391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.464422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.468840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.469180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.469235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.473639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.473980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.474012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.478673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.479014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.479054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.484368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.484692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.757 [2024-11-12 10:39:36.489952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:47.757 [2024-11-12 10:39:36.490325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.757 [2024-11-12 10:39:36.490363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.495347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.495666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.495704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.500598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.500941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.500979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.505709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.506051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.506089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.510958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.511326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.511361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.515849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.516184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.516232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.520797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.521159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.521223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.526338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.526641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.526685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.531860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.532163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.537106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.537480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.537518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.542054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.542413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.053 [2024-11-12 10:39:36.542445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.053 [2024-11-12 10:39:36.546856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.053 [2024-11-12 10:39:36.547200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.547246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.551746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.552083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.552120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.556604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.556934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.556979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.561488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.561837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 
[2024-11-12 10:39:36.561873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.566524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.566878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.566920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.571866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.572231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.572277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.577232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.577587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.577624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.582801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.583166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.583210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.587991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.588343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.588400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.593120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.593488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.593526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.598306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.598690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.598727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.603363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.603733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.603770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.608536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.608927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.608969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.613896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.614253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.614315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.618755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.619117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.619166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.623828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.624154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.624204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.628530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.628869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.628902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.633372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.633734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.633770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.638099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.638451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.638488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.642793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.643161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.643208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.647570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.647894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.647932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.652238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.652588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.652632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.657490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.657876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.657930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.662573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.662907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.662946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.667860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.668230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.668274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.673493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.673822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.673857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.054 [2024-11-12 10:39:36.678862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.054 [2024-11-12 10:39:36.679260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.054 [2024-11-12 10:39:36.679295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.684127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.684481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.684549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.689268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.689609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.689647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.694348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.694701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.694738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.699281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.699667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.699703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.704066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 
[2024-11-12 10:39:36.704419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.704451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.708955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.709317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.709350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.713747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.714096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.714134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.718521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.718891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.718929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.723353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.723729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.723765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.728067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.728419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.728450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.732863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.733218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.733260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.737697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.738026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.738061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.742431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.742775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.742818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.747278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.747651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.747687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.752068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.752407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.752438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.756911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.757256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.757301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.761769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.762098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.762130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.766491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.766819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.766853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.771243] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.771602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.771638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.776027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.776366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.780804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.781146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.781210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.785585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.785943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.785981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.790321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.790658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.790701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.795337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.055 [2024-11-12 10:39:36.795670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.055 [2024-11-12 10:39:36.795709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.055 [2024-11-12 10:39:36.800748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.056 [2024-11-12 10:39:36.801082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.056 [2024-11-12 10:39:36.801125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:48.327 [2024-11-12 10:39:36.806253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.806571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.806609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.811458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.811817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.811857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.816783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.817101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.817140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.822176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.822529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.822567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.827309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.827692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.827729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.832316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.832654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.832694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.837067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.837438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.841872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.842213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.842256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.846696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.847034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.847067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.851494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.851823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.851859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.856197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.856548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.856585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.860988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.861360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.861406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.865853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.866191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.866232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.327 [2024-11-12 10:39:36.870692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.327 [2024-11-12 10:39:36.871032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.327 [2024-11-12 10:39:36.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.875536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 6230.00 IOPS, 778.75 MiB/s [2024-11-12T10:39:37.086Z] [2024-11-12 10:39:36.877040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.877089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.881609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.881950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.881973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.886379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.886706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.886737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.891235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.891581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.891617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.896029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.896368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.896407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.900907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.901253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.901295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.905778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.906125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.328 [2024-11-12 10:39:36.906157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.910611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.910950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.910995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.915533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.915857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.915899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.920150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.920511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.920547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.924938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.925288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.925320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.929838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.930212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.930263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.934761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.935100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.935158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.939504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.939839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.939871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.944223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.944576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.944607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.949089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.949438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.949475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.953903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.954266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.954313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.958792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.959216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.963709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.964032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.964070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.968592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.968929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.968972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.973438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.973766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.973798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.978225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.978561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.978621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.982960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.983338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.983372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.987754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.988081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.328 [2024-11-12 10:39:36.988118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.328 [2024-11-12 10:39:36.992535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.328 [2024-11-12 10:39:36.992864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:36.992896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:36.997336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:36.997685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:36.997721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.002304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.002637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.002675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.007353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.007732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.007769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.012340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.012705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.012741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.017269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.017590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.017627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.022156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.022496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.022530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.026913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.027268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.027299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.031664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.031953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.031979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.036375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.036645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.036670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.040981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 
[2024-11-12 10:39:37.041282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.041307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.045620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.045888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.045914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.050291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.050567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.050592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.055051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.055406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.055450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.059818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.060287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.060310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.064804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.065072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.065097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.069423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.069696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.069721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.329 [2024-11-12 10:39:37.075063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.329 [2024-11-12 10:39:37.075406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.329 [2024-11-12 10:39:37.075465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.080544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.080844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.080872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.085770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.086073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.086101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.090856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.091158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.091196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.095727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.096203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.096261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.100737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.101016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.101041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.105576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.105855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.105881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.110377] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.110653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.110677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.115019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.115360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.115392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.119900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.120373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.120403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.124848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.125119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.125144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.129537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.129805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.129830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.134251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.596 [2024-11-12 10:39:37.134578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.596 [2024-11-12 10:39:37.134615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.596 [2024-11-12 10:39:37.138942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.139287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.139309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:48.597 [2024-11-12 10:39:37.143704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.144157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.144197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.148570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.148841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.148867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.153195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.153476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.153500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.157841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.158112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.158137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.162448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.162715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.162740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.167061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.167387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.167418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.171762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.172195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.172236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.176578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.176858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.176884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.181279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.181547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.181571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.185894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.186163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.186196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.190519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.190789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.190814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.195520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.195882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.195945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.200624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.200898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.200924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.205652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.205926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.205951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.211028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.211486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.211518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.216605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.216939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.216980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.222110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.222483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.222516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.227734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.228063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.228090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.233159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.233493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.238468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.238798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.238825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.243870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.244152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.244203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.249006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.249346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.249378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.254072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.254575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.254637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.259629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.259924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.259950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.264679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.264974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.265001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.269662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.270130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.597 [2024-11-12 10:39:37.270164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.597 [2024-11-12 10:39:37.275171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.597 [2024-11-12 10:39:37.275503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.275530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.280152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.280529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 
[2024-11-12 10:39:37.280562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.285085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.285567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.290560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.290869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.290897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.295638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.295928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.295954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.300707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.301173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.301232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.305924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.306266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.306309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.311070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.311419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.311464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.316175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.316659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.316690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.321381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.321706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.321732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.326719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.327022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.327049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.331672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.332138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.332170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.336801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.337088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.337115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.342113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.342491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.342529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.598 [2024-11-12 10:39:37.347283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.598 [2024-11-12 10:39:37.347639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.598 [2024-11-12 10:39:37.347667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.352791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.353121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.353154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.358381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.358696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.358723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.363319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.363646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.363672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.368386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.368668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.368693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.373544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.373841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.373867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.378427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.378711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.378736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.383599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.383930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.383956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.388804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.389087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.389113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.393757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.394230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.394289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.399188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.399551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.399577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.404098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.404415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.404445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.409067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.409387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.409417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.413817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.414279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.414308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.418693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.418964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.418989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.423628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 
[2024-11-12 10:39:37.423900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.423925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.428335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.428622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.428647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.432991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.433305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.433335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.437757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.438209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.438232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.442710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.442979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.443004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.447577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.447852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.447878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.452305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.452600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.452625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.457051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.457386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.858 [2024-11-12 10:39:37.457417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.858 [2024-11-12 10:39:37.461805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.858 [2024-11-12 10:39:37.462248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.462297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.466723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.466995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.467019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.471394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.471681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.471705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.476060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.476381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.476410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.480843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.481127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.481152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.485781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.486222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.490736] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.491008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.491033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.495566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.495843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.495869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.500378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.500670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.500695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.505080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.505421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.505450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.509919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.510396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.510427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.514789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.515062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.515086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.519532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.519803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.519828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
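The run of entries above is the NVMe/TCP data-digest (DDGST) failure path being exercised: tcp.c recomputes a CRC-32C over each received data PDU, data_crc32_calc_done reports the mismatch, and the affected WRITE is completed with the generic status "Command Transient Transport Error", which is the "(00/22)" pair printed by spdk_nvme_print_completion. As a rough standalone sketch of that check (plain C for illustration, not SPDK source; the constant names, the 32-byte payload and the injected corruption are assumptions made for the example):

/*
 * Illustrative sketch of an NVMe/TCP data digest check.
 * The sender appends a CRC-32C (DDGST) of the data PDU payload;
 * the receiver recomputes it and, on mismatch, completes the
 * command with generic status SCT 0x0 / SC 0x22, the "(00/22)"
 * seen in the completions above.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define NVME_SCT_GENERIC                0x0
#define NVME_SC_TRANSIENT_TRANSPORT_ERR 0x22  /* printed as "(00/22)" */

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32];                  /* len:32, as in the WRITEs above */
    for (size_t i = 0; i < sizeof(payload); i++) {
        payload[i] = (uint8_t)i;
    }

    /* Digest computed by the sender over the original payload. */
    uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

    payload[7] ^= 0xFF;                   /* simulate corruption on the wire */

    /* Digest recomputed by the receiver over what actually arrived. */
    uint32_t ddgst_recv = crc32c(payload, sizeof(payload));

    if (ddgst_recv != ddgst_sent) {
        printf("Data digest error: expected 0x%08x, got 0x%08x\n",
               ddgst_sent, ddgst_recv);
        printf("completing command with status (%02x/%02x)\n",
               NVME_SCT_GENERIC, NVME_SC_TRANSIENT_TRANSPORT_ERR);
    }
    return 0;
}

Compiled and run, the deliberate corruption makes the two digests differ and the sketch prints the same (00/22) status pair that each corrupted WRITE reports in the log entries before and after this point.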
00:17:48.859 [2024-11-12 10:39:37.524332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.524622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.524647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.528989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.529306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.533739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.534186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.534228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.538852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.539169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.539208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.543801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.544071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.544095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.548580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.548857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.548883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.553417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.553706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.553732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.558177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.558503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.558532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.563189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.563543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.563585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.568441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.568814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.568841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.573327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.573617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.573642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.578051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.578373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.578402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.582796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.583082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.583132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.587926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.588480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.859 [2024-11-12 10:39:37.593320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.859 [2024-11-12 10:39:37.593624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.859 [2024-11-12 10:39:37.593649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.860 [2024-11-12 10:39:37.598514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.860 [2024-11-12 10:39:37.598828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.860 [2024-11-12 10:39:37.598857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.860 [2024-11-12 10:39:37.604012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.860 [2024-11-12 10:39:37.604393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.860 [2024-11-12 10:39:37.604431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.860 [2024-11-12 10:39:37.609418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:48.860 [2024-11-12 10:39:37.609797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.860 [2024-11-12 10:39:37.609823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.614975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.615311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.615338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.620217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.620727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.620758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.625522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.625822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.625846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.630426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.630697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.630722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.635071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.635421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.635468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.640179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.640674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.640713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.645405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.645729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.645756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.650323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.650617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.650642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.655113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.655477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.655606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.660190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.660695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 
[2024-11-12 10:39:37.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.665719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.666149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.666329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.670798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.671316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.671493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.676364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.676879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.677152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.681979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.682492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.682741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.687987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.688478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.688672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.694099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.694617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.694779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.699912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.700380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.700550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.705639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.706118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.706165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.710965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.711297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.120 [2024-11-12 10:39:37.711330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.120 [2024-11-12 10:39:37.716048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.120 [2024-11-12 10:39:37.716526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.716588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.721120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.721456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.721485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.725872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.726143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.726168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.730499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.730767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.730792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.735184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.735565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.735601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.740040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.740530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.740575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.745008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.745291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.745315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.749720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.749990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.750015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.754384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.754654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.754679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.759080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.759418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.759450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.763856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.764307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.764356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.769023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.769350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.769385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.773852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.774132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.774157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.778558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.778827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.778851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.783190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.783543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.783580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.787996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.788469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.788499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.792867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.793155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.793188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.797551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.797820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.797844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.802242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 
[2024-11-12 10:39:37.802511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.802535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.806966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.807326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.807359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.811796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.812241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.812288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.816751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.817049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.817075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.821453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.821720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.821745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.826151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.826475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.826506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.830926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.831257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.831296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.835750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) 
with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.836193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.836235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.840672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.840966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.121 [2024-11-12 10:39:37.840990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.121 [2024-11-12 10:39:37.845317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.121 [2024-11-12 10:39:37.845622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.845648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.850041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.850367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.850397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.854771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.855050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.855076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.859479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.859776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.859802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.864228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.864538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.864565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.868948] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.869231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.869267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.122 [2024-11-12 10:39:37.874115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b90b90) with pdu=0x2000166fef90 00:17:49.122 [2024-11-12 10:39:37.874415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.122 [2024-11-12 10:39:37.874440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:49.381 6233.00 IOPS, 779.12 MiB/s 00:17:49.381 Latency(us) 00:17:49.381 [2024-11-12T10:39:38.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.381 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:49.381 nvme0n1 : 2.00 6232.00 779.00 0.00 0.00 2561.96 1362.85 6106.76 00:17:49.381 [2024-11-12T10:39:38.139Z] =================================================================================================================== 00:17:49.381 [2024-11-12T10:39:38.139Z] Total : 6232.00 779.00 0.00 0.00 2561.96 1362.85 6106.76 00:17:49.381 { 00:17:49.381 "results": [ 00:17:49.381 { 00:17:49.381 "job": "nvme0n1", 00:17:49.381 "core_mask": "0x2", 00:17:49.381 "workload": "randwrite", 00:17:49.381 "status": "finished", 00:17:49.381 "queue_depth": 16, 00:17:49.381 "io_size": 131072, 00:17:49.381 "runtime": 2.004011, 00:17:49.381 "iops": 6232.001720549438, 00:17:49.381 "mibps": 779.0002150686797, 00:17:49.381 "io_failed": 0, 00:17:49.381 "io_timeout": 0, 00:17:49.381 "avg_latency_us": 2561.961997685236, 00:17:49.381 "min_latency_us": 1362.850909090909, 00:17:49.381 "max_latency_us": 6106.763636363637 00:17:49.381 } 00:17:49.381 ], 00:17:49.381 "core_count": 1 00:17:49.381 } 00:17:49.381 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:49.381 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:49.381 | .driver_specific 00:17:49.381 | .nvme_error 00:17:49.381 | .status_code 00:17:49.381 | .command_transient_transport_error' 00:17:49.381 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:49.381 10:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 402 > 0 )) 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79946 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79946 ']' 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79946 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:49.640 10:39:38 
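The transient-error count that the test asserts on just below is read back from bdevperf over its RPC socket; a standalone sketch of the same query, using the socket path, bdev name, and jq filter shown in this run (host/digest.sh@18/27/28 above):

  # Read the per-bdev NVMe error counters kept by bdevperf and pull out the
  # transient transport error count (same RPC and jq filter as host/digest.sh above).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only passes if at least one injected CRC error was counted.
  (( errcount > 0 )) && echo "transient transport errors seen: $errcount"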
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79946 00:17:49.640 killing process with pid 79946 00:17:49.640 Received shutdown signal, test time was about 2.000000 seconds 00:17:49.640 00:17:49.640 Latency(us) 00:17:49.640 [2024-11-12T10:39:38.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.640 [2024-11-12T10:39:38.398Z] =================================================================================================================== 00:17:49.640 [2024-11-12T10:39:38.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79946' 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79946 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79946 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79762 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79762 ']' 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79762 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:49.640 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:49.641 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79762 00:17:49.899 killing process with pid 79762 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79762' 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79762 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79762 00:17:49.899 00:17:49.899 real 0m15.288s 00:17:49.899 user 0m29.374s 00:17:49.899 sys 0m4.242s 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.899 ************************************ 00:17:49.899 END TEST nvmf_digest_error 00:17:49.899 ************************************ 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:49.899 10:39:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.899 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.899 rmmod nvme_tcp 00:17:50.158 rmmod nvme_fabrics 00:17:50.158 rmmod nvme_keyring 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79762 ']' 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79762 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 79762 ']' 00:17:50.158 Process with pid 79762 is not found 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 79762 00:17:50.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79762) - No such process 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 79762 is not found' 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:50.158 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:50.159 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:50.159 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:50.159 
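The nvmftestfini / nvmf_veth_fini teardown traced here (and continuing just below) amounts to the following sketch; module, interface, and namespace names are the ones this run used:

  # Unload the kernel NVMe/TCP initiator modules pulled in by the test
  modprobe -r nvme-tcp nvme-fabrics
  # Remove only the firewall rules the test added (they carry an SPDK_NVMF comment)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Detach and drop the veth/bridge topology, then the target namespace
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null
      ip link set "$dev" down 2>/dev/null
  done
  ip link delete nvmf_br type bridge 2>/dev/null
  ip link delete nvmf_init_if 2>/dev/null
  ip link delete nvmf_init_if2 2>/dev/null
  # Deleting the namespace also removes the interfaces that were moved into it
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null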
10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:50.159 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:50.159 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.159 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:50.417 ************************************ 00:17:50.417 END TEST nvmf_digest 00:17:50.417 ************************************ 00:17:50.417 00:17:50.417 real 0m32.422s 00:17:50.417 user 1m1.419s 00:17:50.417 sys 0m8.870s 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:50.417 10:39:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:50.417 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:50.417 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:50.417 10:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.418 ************************************ 00:17:50.418 START TEST nvmf_host_multipath 00:17:50.418 ************************************ 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:50.418 * Looking for test storage... 
00:17:50.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:17:50.418 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:50.677 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.678 --rc genhtml_branch_coverage=1 00:17:50.678 --rc genhtml_function_coverage=1 00:17:50.678 --rc genhtml_legend=1 00:17:50.678 --rc geninfo_all_blocks=1 00:17:50.678 --rc geninfo_unexecuted_blocks=1 00:17:50.678 00:17:50.678 ' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.678 --rc genhtml_branch_coverage=1 00:17:50.678 --rc genhtml_function_coverage=1 00:17:50.678 --rc genhtml_legend=1 00:17:50.678 --rc geninfo_all_blocks=1 00:17:50.678 --rc geninfo_unexecuted_blocks=1 00:17:50.678 00:17:50.678 ' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.678 --rc genhtml_branch_coverage=1 00:17:50.678 --rc genhtml_function_coverage=1 00:17:50.678 --rc genhtml_legend=1 00:17:50.678 --rc geninfo_all_blocks=1 00:17:50.678 --rc geninfo_unexecuted_blocks=1 00:17:50.678 00:17:50.678 ' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.678 --rc genhtml_branch_coverage=1 00:17:50.678 --rc genhtml_function_coverage=1 00:17:50.678 --rc genhtml_legend=1 00:17:50.678 --rc geninfo_all_blocks=1 00:17:50.678 --rc geninfo_unexecuted_blocks=1 00:17:50.678 00:17:50.678 ' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
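The lcov gate traced above relies on scripts/common.sh's element-wise version compare (cmp_versions via lt). A stripped-down equivalent, assuming purely numeric dot-separated components and a hypothetical helper name ver_lt:

  # Succeeds when dotted version $1 is strictly lower than $2, comparing numeric
  # components left to right and padding the shorter version with zeros.
  ver_lt() {
      local IFS=.- i a b
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1  # equal is not "less than"
  }
  ver_lt 1.15 2 && echo "lcov predates 2.x, keep the legacy --rc options"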
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:50.678 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:50.679 Cannot find device "nvmf_init_br" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:50.679 Cannot find device "nvmf_init_br2" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:50.679 Cannot find device "nvmf_tgt_br" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.679 Cannot find device "nvmf_tgt_br2" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:50.679 Cannot find device "nvmf_init_br" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:50.679 Cannot find device "nvmf_init_br2" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:50.679 Cannot find device "nvmf_tgt_br" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:50.679 Cannot find device "nvmf_tgt_br2" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:50.679 Cannot find device "nvmf_br" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:50.679 Cannot find device "nvmf_init_if" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:50.679 Cannot find device "nvmf_init_if2" 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:50.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.679 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:50.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:50.940 00:17:50.940 --- 10.0.0.3 ping statistics --- 00:17:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.940 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:50.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:50.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:50.940 00:17:50.940 --- 10.0.0.4 ping statistics --- 00:17:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.940 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:50.940 00:17:50.940 --- 10.0.0.1 ping statistics --- 00:17:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.940 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:50.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:50.940 00:17:50.940 --- 10.0.0.2 ping statistics --- 00:17:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.940 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80263 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80263 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80263 ']' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.940 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 [2024-11-12 10:39:39.672031] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
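nvmf_veth_init, traced above, builds the topology the pings just verified. A condensed sketch with the same names and addresses (the real helper also embeds the full rule text in each iptables comment):

  # Two veth pairs for the initiator, two for the target; the target ends are moved
  # into a private namespace and all bridge ends are enslaved to one bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Allow NVMe/TCP (port 4420) in from the initiator interfaces, tagged so teardown can find the rules
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  # Sanity check: host reaches the target addresses, the namespace reaches the host
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1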
00:17:50.940 [2024-11-12 10:39:39.672899] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.198 [2024-11-12 10:39:39.825566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:51.198 [2024-11-12 10:39:39.866435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.198 [2024-11-12 10:39:39.866727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.198 [2024-11-12 10:39:39.866961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.198 [2024-11-12 10:39:39.867119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.198 [2024-11-12 10:39:39.867277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.198 [2024-11-12 10:39:39.868238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.198 [2024-11-12 10:39:39.868252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.198 [2024-11-12 10:39:39.904119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.456 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.456 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:17:51.456 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.456 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.456 10:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:51.456 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.456 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80263 00:17:51.456 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.714 [2024-11-12 10:39:40.272634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.714 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:51.973 Malloc0 00:17:51.973 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:52.239 10:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.498 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:52.756 [2024-11-12 10:39:41.426476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.756 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:53.015 [2024-11-12 10:39:41.658553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80307 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80307 /var/tmp/bdevperf.sock 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80307 ']' 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:53.015 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:53.273 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:17:53.273 10:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:53.532 10:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:54.099 Nvme0n1 00:17:54.099 10:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:54.358 Nvme0n1 00:17:54.358 10:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:54.358 10:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:55.294 10:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:55.294 10:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:55.552 10:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
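Collected from the xtrace above into one place, the multipath setup is roughly: start nvmf_tgt inside the namespace, export one malloc namespace behind an ANA-reporting subsystem with two TCP listeners, then attach both ports as paths of a single bdevperf controller. A sketch, omitting the waitforlisten polling between steps:

  # Target side: nvmf_tgt runs inside the namespace; its default RPC socket stays reachable from the host
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Initiator side: bdevperf starts idle (-z); both ports become paths of controller Nvme0
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  for port in 4420 4421; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s $port \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  done
  # Start the verify workload whose I/O the ANA-state changes below will steer between listeners
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &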
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:55.811 10:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:55.811 10:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80349 00:17:55.811 10:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:55.811 10:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:02.373 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:02.373 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:02.373 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:02.373 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.374 Attaching 4 probes... 00:18:02.374 @path[10.0.0.3, 4421]: 19265 00:18:02.374 @path[10.0.0.3, 4421]: 19631 00:18:02.374 @path[10.0.0.3, 4421]: 19648 00:18:02.374 @path[10.0.0.3, 4421]: 20068 00:18:02.374 @path[10.0.0.3, 4421]: 19573 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80349 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:02.374 10:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:02.374 10:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:02.632 10:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:02.632 10:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:02.632 10:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80464 00:18:02.632 10:39:51 
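confirm_io_on_port, used repeatedly from here on, pairs an ANA-state change with a bpftrace count of which listener actually serves I/O. A sketch of one cycle, assuming the target pid from this run (80263) and a local trace.txt instead of the harness path test/nvmf/host/trace.txt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Make 4421 the optimized path and 4420 non-optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4421 -n optimized

  # Trace which listener receives I/O (nvmf_path.bt counts requests per path),
  # then compare against the port the target reports as optimized
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  tracer=$!
  sleep 6
  active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  traced_port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ $traced_port == "$active_port" ]] && echo "I/O confirmed on port $active_port"
  kill $tracer && rm -f trace.txt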
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.192 Attaching 4 probes... 00:18:09.192 @path[10.0.0.3, 4420]: 19376 00:18:09.192 @path[10.0.0.3, 4420]: 19456 00:18:09.192 @path[10.0.0.3, 4420]: 19656 00:18:09.192 @path[10.0.0.3, 4420]: 19621 00:18:09.192 @path[10.0.0.3, 4420]: 19546 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80464 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:09.192 10:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:09.450 10:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:09.450 10:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80582 00:18:09.450 10:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:09.450 10:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.013 Attaching 4 probes... 00:18:16.013 @path[10.0.0.3, 4421]: 15383 00:18:16.013 @path[10.0.0.3, 4421]: 18934 00:18:16.013 @path[10.0.0.3, 4421]: 19359 00:18:16.013 @path[10.0.0.3, 4421]: 19344 00:18:16.013 @path[10.0.0.3, 4421]: 19382 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:16.013 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80582 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:16.014 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:16.272 10:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:16.530 10:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:16.530 10:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80694 00:18:16.530 10:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:16.530 10:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.093 Attaching 4 probes... 
00:18:23.093 00:18:23.093 00:18:23.093 00:18:23.093 00:18:23.093 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80694 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:23.093 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:23.358 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:23.359 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.359 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80812 00:18:23.359 10:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:29.929 10:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.929 10:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.929 Attaching 4 probes... 
00:18:29.929 @path[10.0.0.3, 4421]: 18471 00:18:29.929 @path[10.0.0.3, 4421]: 19129 00:18:29.929 @path[10.0.0.3, 4421]: 18391 00:18:29.929 @path[10.0.0.3, 4421]: 18066 00:18:29.929 @path[10.0.0.3, 4421]: 17152 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80812 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:29.929 [2024-11-12 10:40:18.437009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4810 is same with the state(6) to be set 00:18:29.929 [2024-11-12 10:40:18.437089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4810 is same with the state(6) to be set 00:18:29.929 [2024-11-12 10:40:18.437118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4810 is same with the state(6) to be set 00:18:29.929 [2024-11-12 10:40:18.437125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4810 is same with the state(6) to be set 00:18:29.929 [2024-11-12 10:40:18.437133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4810 is same with the state(6) to be set 00:18:29.929 10:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:30.865 10:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:30.865 10:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80930 00:18:30.865 10:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:30.865 10:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.431 Attaching 4 probes... 
00:18:37.431 @path[10.0.0.3, 4420]: 18711 00:18:37.431 @path[10.0.0.3, 4420]: 18987 00:18:37.431 @path[10.0.0.3, 4420]: 18547 00:18:37.431 @path[10.0.0.3, 4420]: 18727 00:18:37.431 @path[10.0.0.3, 4420]: 18968 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80930 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.431 10:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:37.431 [2024-11-12 10:40:25.998681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:37.431 10:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:37.691 10:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:44.259 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:44.259 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81110 00:18:44.259 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80263 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.259 10:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.531 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.531 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.115 Attaching 4 probes... 
00:18:50.115 @path[10.0.0.3, 4421]: 18157 00:18:50.115 @path[10.0.0.3, 4421]: 19137 00:18:50.115 @path[10.0.0.3, 4421]: 18981 00:18:50.115 @path[10.0.0.3, 4421]: 19048 00:18:50.115 @path[10.0.0.3, 4421]: 19608 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81110 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80307 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80307 ']' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80307 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80307 00:18:50.115 killing process with pid 80307 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80307' 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80307 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80307 00:18:50.115 { 00:18:50.115 "results": [ 00:18:50.115 { 00:18:50.115 "job": "Nvme0n1", 00:18:50.115 "core_mask": "0x4", 00:18:50.115 "workload": "verify", 00:18:50.115 "status": "terminated", 00:18:50.115 "verify_range": { 00:18:50.115 "start": 0, 00:18:50.115 "length": 16384 00:18:50.115 }, 00:18:50.115 "queue_depth": 128, 00:18:50.115 "io_size": 4096, 00:18:50.115 "runtime": 55.603046, 00:18:50.115 "iops": 8078.352398176172, 00:18:50.115 "mibps": 31.55606405537567, 00:18:50.115 "io_failed": 0, 00:18:50.115 "io_timeout": 0, 00:18:50.115 "avg_latency_us": 15819.911812573631, 00:18:50.115 "min_latency_us": 837.8181818181819, 00:18:50.115 "max_latency_us": 7076934.749090909 00:18:50.115 } 00:18:50.115 ], 00:18:50.115 "core_count": 1 00:18:50.115 } 00:18:50.115 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80307 00:18:50.116 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:50.116 [2024-11-12 10:39:41.726275] Starting SPDK v25.01-pre git sha1 eba7e4aea / 
DPDK 24.03.0 initialization... 00:18:50.116 [2024-11-12 10:39:41.726383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80307 ] 00:18:50.116 [2024-11-12 10:39:41.868439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.116 [2024-11-12 10:39:41.898269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.116 [2024-11-12 10:39:41.927016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.116 Running I/O for 90 seconds... 00:18:50.116 7444.00 IOPS, 29.08 MiB/s [2024-11-12T10:40:38.874Z] 8408.50 IOPS, 32.85 MiB/s [2024-11-12T10:40:38.874Z] 8891.00 IOPS, 34.73 MiB/s [2024-11-12T10:40:38.874Z] 9126.25 IOPS, 35.65 MiB/s [2024-11-12T10:40:38.874Z] 9272.20 IOPS, 36.22 MiB/s [2024-11-12T10:40:38.874Z] 9393.50 IOPS, 36.69 MiB/s [2024-11-12T10:40:38.874Z] 9452.71 IOPS, 36.92 MiB/s [2024-11-12T10:40:38.874Z] 9483.12 IOPS, 37.04 MiB/s [2024-11-12T10:40:38.874Z] [2024-11-12 10:39:51.287927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.287990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.116 [2024-11-12 10:39:51.288623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.116 [2024-11-12 10:39:51.288883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.288977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.288998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.289034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.289069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.289104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.289139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.116 [2024-11-12 10:39:51.289175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.116 [2024-11-12 10:39:51.289189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.289495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.289981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.289995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.290045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.117 [2024-11-12 10:39:51.290080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.117 [2024-11-12 10:39:51.290416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.117 [2024-11-12 10:39:51.290436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.117 [2024-11-12 10:39:51.290450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.118 [2024-11-12 10:39:51.290664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.290973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.291013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.291033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.291053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.291074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.291089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.291134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.118 [2024-11-12 10:39:51.291151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.118 [2024-11-12 10:39:51.291173] 
00:18:50.118 [2024-11-12 10:39:51.291188 - 10:39:51.294273] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: repeated READ commands (lba 104048-104128, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (lba 104520-104768, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1, various cid, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:18:50.119 9477.56 IOPS, 37.02 MiB/s [2024-11-12T10:40:38.877Z] 9509.80 IOPS, 37.15 MiB/s [2024-11-12T10:40:38.877Z] 9523.09 IOPS, 37.20 MiB/s [2024-11-12T10:40:38.877Z] 9561.50 IOPS, 37.35 MiB/s [2024-11-12T10:40:38.877Z] 9581.08 IOPS, 37.43 MiB/s [2024-11-12T10:40:38.877Z] 9598.43 IOPS, 37.49 MiB/s [2024-11-12T10:40:38.877Z]
00:18:50.119 [2024-11-12 10:39:57.873130 - 10:39:57.878705] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: the same pattern continues on sqid:1 nsid:1 (WRITE lba 91856-92488, READ lba 91472-91848, len:8, various cid), each command again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:18:50.123 9500.40 IOPS, 37.11 MiB/s [2024-11-12T10:40:38.881Z] 8997.25 IOPS, 35.15 MiB/s [2024-11-12T10:40:38.881Z] 9029.88 IOPS, 35.27 MiB/s [2024-11-12T10:40:38.881Z] 9061.56 IOPS, 35.40 MiB/s [2024-11-12T10:40:38.881Z] 9098.32 IOPS, 35.54 MiB/s [2024-11-12T10:40:38.881Z] 9127.70 IOPS, 35.66 MiB/s [2024-11-12T10:40:38.881Z] 9154.10 IOPS, 35.76 MiB/s [2024-11-12T10:40:38.881Z]
00:18:50.123 [2024-11-12 10:40:05.018785 - 10:40:05.020270] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: a further burst of WRITE (lba 40608-40792) and READ (lba 40160-40232) commands on sqid:1 nsid:1, len:8, completes with ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.124 [2024-11-12 10:40:05.020706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.124 [2024-11-12 10:40:05.020721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.020977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.020991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.125 [2024-11-12 10:40:05.021758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.125 [2024-11-12 10:40:05.021791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.125 [2024-11-12 10:40:05.021811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.021825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.021846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.021859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.021879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.021892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.021912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.021926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.021946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.021960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.021986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:50.126 [2024-11-12 10:40:05.022022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.126 [2024-11-12 10:40:05.022468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.022978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.126 [2024-11-12 10:40:05.022992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.126 [2024-11-12 10:40:05.024240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.024270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.024967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.024982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.127 [2024-11-12 10:40:05.025523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:18:50.127 [2024-11-12 10:40:05.025776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.127 [2024-11-12 10:40:05.025835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.127 [2024-11-12 10:40:05.025857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.025871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.025894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.025909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.025929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.025943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.025964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.025993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.026384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.026398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.128 [2024-11-12 10:40:05.028365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 
10:40:05.028791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.028967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.029002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.128 [2024-11-12 10:40:05.029016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.128 [2024-11-12 10:40:05.029053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40392 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.029748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 
10:40:05.029944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.029980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.030014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.129 [2024-11-12 10:40:05.030029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.039676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.039770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.039807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.039828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.039858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.039877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.039911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.039930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.039959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.039978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.040007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.040027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.040056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.040075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.040104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.040123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.040152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.040171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.129 [2024-11-12 10:40:05.040232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.129 [2024-11-12 10:40:05.040253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.040790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.040839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.040888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.040936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.040965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.040985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.130 [2024-11-12 10:40:05.041644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.041693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.130 [2024-11-12 10:40:05.041751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.130 [2024-11-12 10:40:05.041781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.041801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.041830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.041850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.041879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.041899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.041927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.041947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.041975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.041994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.042882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.042940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.042968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.042988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:18:50.131 [2024-11-12 10:40:05.043266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.131 [2024-11-12 10:40:05.043334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.043383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.043431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.043479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.043535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.131 [2024-11-12 10:40:05.043572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.131 [2024-11-12 10:40:05.043591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.043975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.043994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.044023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.044042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.132 9164.91 IOPS, 35.80 MiB/s [2024-11-12T10:40:38.890Z] [2024-11-12 10:40:05.046362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.046403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.132 [2024-11-12 10:40:05.046463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 
10:40:05.046588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.046967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.046996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.132 [2024-11-12 10:40:05.047519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.132 [2024-11-12 10:40:05.047548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.047568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.047617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047645] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.047665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.047721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.047769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.047826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.047874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.047930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.047960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.047980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 
10:40:05.048163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.133 [2024-11-12 10:40:05.048840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.048964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.048984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.133 [2024-11-12 10:40:05.049403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.133 [2024-11-12 10:40:05.049422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.049930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.049960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.049981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.134 [2024-11-12 10:40:05.050778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.050827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.050875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.050923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.050971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.050999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.134 [2024-11-12 10:40:05.051319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:50.134 [2024-11-12 10:40:05.051347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.051938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.051972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.051993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.135 [2024-11-12 10:40:05.052245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.135 [2024-11-12 10:40:05.052708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.135 [2024-11-12 10:40:05.052728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.052742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.052762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.052782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.054726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.054788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.054825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.054859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.054894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.054929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.054963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.054983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.054997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055761] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.136 [2024-11-12 10:40:05.055845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.055879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.055913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.055947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.055968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.136 [2024-11-12 10:40:05.055982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:50.136 [2024-11-12 10:40:05.056020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e 
p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.137 [2024-11-12 10:40:05.056694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.056962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.056976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.137 [2024-11-12 10:40:05.057457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:50.137 [2024-11-12 10:40:05.057479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.057495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:102 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.057965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.057979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.058014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.058048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.058081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.138 [2024-11-12 10:40:05.058116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:18:50.138 [2024-11-12 10:40:05.058621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:50.138 [2024-11-12 10:40:05.058936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.138 [2024-11-12 10:40:05.058950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.058970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.058984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.139 [2024-11-12 10:40:05.059455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 
[2024-11-12 10:40:05.059820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.059943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.059957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:05.060361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:05.060389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.139 8766.43 IOPS, 34.24 MiB/s [2024-11-12T10:40:38.897Z] 8401.17 IOPS, 32.82 MiB/s [2024-11-12T10:40:38.897Z] 8065.12 IOPS, 31.50 MiB/s [2024-11-12T10:40:38.897Z] 7754.92 IOPS, 30.29 MiB/s [2024-11-12T10:40:38.897Z] 7467.70 IOPS, 29.17 MiB/s [2024-11-12T10:40:38.897Z] 7201.00 IOPS, 28.13 MiB/s [2024-11-12T10:40:38.897Z] 6952.69 IOPS, 27.16 MiB/s [2024-11-12T10:40:38.897Z] 7017.27 IOPS, 27.41 MiB/s [2024-11-12T10:40:38.897Z] 7100.84 IOPS, 27.74 MiB/s [2024-11-12T10:40:38.897Z] 7167.94 IOPS, 28.00 MiB/s [2024-11-12T10:40:38.897Z] 7230.36 IOPS, 28.24 MiB/s [2024-11-12T10:40:38.897Z] 7269.59 IOPS, 28.40 MiB/s [2024-11-12T10:40:38.897Z] 7308.74 IOPS, 28.55 MiB/s [2024-11-12T10:40:38.897Z] [2024-11-12 10:40:18.438210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.139 [2024-11-12 10:40:18.438645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:50.139 [2024-11-12 10:40:18.438709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.438951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.438964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.140 [2024-11-12 10:40:18.439239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.140 [2024-11-12 10:40:18.439794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.140 [2024-11-12 10:40:18.439806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.439986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.439998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 
[2024-11-12 10:40:18.440203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.141 [2024-11-12 10:40:18.440521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.141 [2024-11-12 10:40:18.440892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.141 [2024-11-12 10:40:18.440905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.440920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.440948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.440963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.440976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.440991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.142 [2024-11-12 10:40:18.441867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:50.142 [2024-11-12 10:40:18.441908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.142 [2024-11-12 10:40:18.441922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.142 [2024-11-12 10:40:18.441937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-11-12 10:40:18.441950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.441969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-11-12 10:40:18.441982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.441997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-11-12 10:40:18.442010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-11-12 10:40:18.442037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.143 [2024-11-12 10:40:18.442063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2506290 is same with the state(6) to be set 00:18:50.143 [2024-11-12 10:40:18.442093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108264 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108784 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108792 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108808 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108816 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108832 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 
10:40:18.442548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108840 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108848 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108856 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108864 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108872 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108880 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442864] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108888 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108896 len:8 PRP1 0x0 PRP2 0x0 00:18:50.143 [2024-11-12 10:40:18.442927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.143 [2024-11-12 10:40:18.442940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.143 [2024-11-12 10:40:18.442949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.143 [2024-11-12 10:40:18.442958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108904 len:8 PRP1 0x0 PRP2 0x0 00:18:50.144 [2024-11-12 10:40:18.442970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.144 [2024-11-12 10:40:18.442982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.144 [2024-11-12 10:40:18.442991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.144 [2024-11-12 10:40:18.443000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108912 len:8 PRP1 0x0 PRP2 0x0 00:18:50.144 [2024-11-12 10:40:18.443012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.144 [2024-11-12 10:40:18.443024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.144 [2024-11-12 10:40:18.443034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.144 [2024-11-12 10:40:18.443043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108920 len:8 PRP1 0x0 PRP2 0x0 00:18:50.144 [2024-11-12 10:40:18.443054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.144 [2024-11-12 10:40:18.444332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:50.144 [2024-11-12 10:40:18.444418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.144 [2024-11-12 10:40:18.444444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.144 [2024-11-12 10:40:18.444477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472e50 (9): Bad file descriptor 00:18:50.144 [2024-11-12 10:40:18.444906] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:18:50.144 [2024-11-12 10:40:18.444964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472e50 with addr=10.0.0.3, port=4421 00:18:50.144 [2024-11-12 10:40:18.444996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472e50 is same with the state(6) to be set 00:18:50.144 [2024-11-12 10:40:18.445061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472e50 (9): Bad file descriptor 00:18:50.144 [2024-11-12 10:40:18.445106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:50.144 [2024-11-12 10:40:18.445124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:50.144 [2024-11-12 10:40:18.445137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:50.144 [2024-11-12 10:40:18.445150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:18:50.144 [2024-11-12 10:40:18.445163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:50.144 7345.08 IOPS, 28.69 MiB/s [2024-11-12T10:40:38.902Z] 7391.11 IOPS, 28.87 MiB/s [2024-11-12T10:40:38.902Z] 7445.50 IOPS, 29.08 MiB/s [2024-11-12T10:40:38.902Z] 7498.44 IOPS, 29.29 MiB/s [2024-11-12T10:40:38.902Z] 7542.68 IOPS, 29.46 MiB/s [2024-11-12T10:40:38.902Z] 7587.54 IOPS, 29.64 MiB/s [2024-11-12T10:40:38.902Z] 7632.74 IOPS, 29.82 MiB/s [2024-11-12T10:40:38.902Z] 7672.53 IOPS, 29.97 MiB/s [2024-11-12T10:40:38.902Z] 7711.43 IOPS, 30.12 MiB/s [2024-11-12T10:40:38.902Z] 7750.56 IOPS, 30.28 MiB/s [2024-11-12T10:40:38.902Z] [2024-11-12 10:40:28.509452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:18:50.144 7785.80 IOPS, 30.41 MiB/s [2024-11-12T10:40:38.902Z] 7821.68 IOPS, 30.55 MiB/s [2024-11-12T10:40:38.902Z] 7858.06 IOPS, 30.70 MiB/s [2024-11-12T10:40:38.902Z] 7892.47 IOPS, 30.83 MiB/s [2024-11-12T10:40:38.902Z] 7916.10 IOPS, 30.92 MiB/s [2024-11-12T10:40:38.902Z] 7943.59 IOPS, 31.03 MiB/s [2024-11-12T10:40:38.902Z] 7975.21 IOPS, 31.15 MiB/s [2024-11-12T10:40:38.902Z] 8002.47 IOPS, 31.26 MiB/s [2024-11-12T10:40:38.902Z] 8032.06 IOPS, 31.38 MiB/s [2024-11-12T10:40:38.902Z] 8062.45 IOPS, 31.49 MiB/s [2024-11-12T10:40:38.902Z] Received shutdown signal, test time was about 55.603876 seconds 00:18:50.144 00:18:50.144 Latency(us) 00:18:50.144 [2024-11-12T10:40:38.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.144 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.144 Verification LBA range: start 0x0 length 0x4000 00:18:50.144 Nvme0n1 : 55.60 8078.35 31.56 0.00 0.00 15819.91 837.82 7076934.75 00:18:50.144 [2024-11-12T10:40:38.902Z] =================================================================================================================== 00:18:50.144 [2024-11-12T10:40:38.902Z] Total : 8078.35 31.56 0.00 0.00 15819.91 837.82 7076934.75 00:18:50.144 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.404 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:50.404 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:50.404 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:50.404 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.404 10:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.404 rmmod nvme_tcp 00:18:50.404 rmmod nvme_fabrics 00:18:50.404 rmmod nvme_keyring 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80263 ']' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80263 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80263 ']' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80263 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80263 00:18:50.404 killing process with pid 80263 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80263' 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80263 00:18:50.404 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80263 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:50.663 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.922 10:40:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:50.922 00:18:50.922 real 1m0.522s 00:18:50.922 user 2m47.940s 00:18:50.922 sys 0m17.912s 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 ************************************ 00:18:50.922 END TEST nvmf_host_multipath 00:18:50.922 ************************************ 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 ************************************ 00:18:50.922 START TEST nvmf_timeout 00:18:50.922 ************************************ 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:50.922 * Looking for test storage... 00:18:50.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.922 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:51.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.182 --rc genhtml_branch_coverage=1 00:18:51.182 --rc genhtml_function_coverage=1 00:18:51.182 --rc genhtml_legend=1 00:18:51.182 --rc geninfo_all_blocks=1 00:18:51.182 --rc geninfo_unexecuted_blocks=1 00:18:51.182 00:18:51.182 ' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:51.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.182 --rc genhtml_branch_coverage=1 00:18:51.182 --rc genhtml_function_coverage=1 00:18:51.182 --rc genhtml_legend=1 00:18:51.182 --rc geninfo_all_blocks=1 00:18:51.182 --rc geninfo_unexecuted_blocks=1 00:18:51.182 00:18:51.182 ' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:51.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.182 --rc genhtml_branch_coverage=1 00:18:51.182 --rc genhtml_function_coverage=1 00:18:51.182 --rc genhtml_legend=1 00:18:51.182 --rc geninfo_all_blocks=1 00:18:51.182 --rc geninfo_unexecuted_blocks=1 00:18:51.182 00:18:51.182 ' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:51.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.182 --rc genhtml_branch_coverage=1 00:18:51.182 --rc genhtml_function_coverage=1 00:18:51.182 --rc genhtml_legend=1 00:18:51.182 --rc geninfo_all_blocks=1 00:18:51.182 --rc geninfo_unexecuted_blocks=1 00:18:51.182 00:18:51.182 ' 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.182 
10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.182 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:51.183 10:40:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:51.183 Cannot find device "nvmf_init_br" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:51.183 Cannot find device "nvmf_init_br2" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:18:51.183 Cannot find device "nvmf_tgt_br" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.183 Cannot find device "nvmf_tgt_br2" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:51.183 Cannot find device "nvmf_init_br" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:51.183 Cannot find device "nvmf_init_br2" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:51.183 Cannot find device "nvmf_tgt_br" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:51.183 Cannot find device "nvmf_tgt_br2" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:51.183 Cannot find device "nvmf_br" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:51.183 Cannot find device "nvmf_init_if" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:51.183 Cannot find device "nvmf_init_if2" 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:51.183 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:51.442 10:40:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:51.442 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:51.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:51.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:51.443 00:18:51.443 --- 10.0.0.3 ping statistics --- 00:18:51.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.443 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:51.443 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:51.443 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:18:51.443 00:18:51.443 --- 10.0.0.4 ping statistics --- 00:18:51.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.443 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:51.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:51.443 00:18:51.443 --- 10.0.0.1 ping statistics --- 00:18:51.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.443 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:51.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:51.443 00:18:51.443 --- 10.0.0.2 ping statistics --- 00:18:51.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.443 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81472 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81472 00:18:51.443 10:40:40 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81472 ']' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.443 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.702 [2024-11-12 10:40:40.234508] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:18:51.702 [2024-11-12 10:40:40.234608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.702 [2024-11-12 10:40:40.382812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.702 [2024-11-12 10:40:40.411105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.702 [2024-11-12 10:40:40.411507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.702 [2024-11-12 10:40:40.411664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.702 [2024-11-12 10:40:40.411784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.702 [2024-11-12 10:40:40.411816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:51.702 [2024-11-12 10:40:40.412673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.702 [2024-11-12 10:40:40.412682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.702 [2024-11-12 10:40:40.440105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.961 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:52.220 [2024-11-12 10:40:40.818744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.220 10:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:52.480 Malloc0 00:18:52.480 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.739 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.998 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:53.258 [2024-11-12 10:40:41.894509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:53.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81509 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81509 /var/tmp/bdevperf.sock 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81509 ']' 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.258 10:40:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.258 [2024-11-12 10:40:41.955542] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:18:53.258 [2024-11-12 10:40:41.955644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81509 ] 00:18:53.517 [2024-11-12 10:40:42.105716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.517 [2024-11-12 10:40:42.144986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.517 [2024-11-12 10:40:42.179267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.454 10:40:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.454 10:40:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:18:54.454 10:40:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:54.454 10:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:54.713 NVMe0n1 00:18:54.973 10:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81533 00:18:54.973 10:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.973 10:40:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:54.973 Running I/O for 10 seconds... 
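(Recap, not part of the captured console output: the trace above walks through the bring-up that host/timeout.sh drives before the I/O phase traced below. The sketch simply collects those RPC and bdevperf invocations in order, with paths shortened relative to the SPDK repo; all commands, NQNs, addresses and options are copied from the trace itself, and the comments only restate what the flag names and the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values in the trace suggest, not timeout.sh internals.)

    # Target side (nvmf_tgt was started inside the nvmf_tgt_ns_spdk namespace earlier in the trace):
    # TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 listening on 10.0.0.3:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host side: a standalone bdevperf app on core mask 0x4 with its own RPC socket; the controller
    # is attached with a 5 s ctrlr-loss timeout and 2 s reconnect delay, then the queued verify
    # workload is started via bdevperf.py
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &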
00:18:55.910 10:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:56.171 7701.00 IOPS, 30.08 MiB/s [2024-11-12T10:40:44.929Z] [2024-11-12 10:40:44.702693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.702994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.703014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.703035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.703056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.703076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.171 [2024-11-12 10:40:44.703482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.171 [2024-11-12 10:40:44.703509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.171 [2024-11-12 10:40:44.703530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.171 [2024-11-12 10:40:44.703552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.171 [2024-11-12 10:40:44.703573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.171 [2024-11-12 10:40:44.703584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:56.172 [2024-11-12 10:40:44.703685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.703980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.703989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.172 [2024-11-12 10:40:44.704137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.172 [2024-11-12 10:40:44.704158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.172 [2024-11-12 10:40:44.704848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.704970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.704978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.172 [2024-11-12 10:40:44.705569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.172 [2024-11-12 10:40:44.705580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.705982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.705993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 
[2024-11-12 10:40:44.706149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.706923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.706934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.173 [2024-11-12 10:40:44.707476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.173 [2024-11-12 10:40:44.707485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.707704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.707715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:56.174 [2024-11-12 10:40:44.708418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 
10:40:44.708943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.708983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.708994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.709003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.709118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.709138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.174 [2024-11-12 10:40:44.709158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaab280 is same with the state(6) to be set 00:18:56.174 [2024-11-12 10:40:44.709205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:56.174 [2024-11-12 10:40:44.709215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:56.174 [2024-11-12 10:40:44.709327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69648 len:8 PRP1 0x0 PRP2 0x0 00:18:56.174 [2024-11-12 10:40:44.709344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.174 [2024-11-12 10:40:44.709953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.174 [2024-11-12 
10:40:44.709974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.174 [2024-11-12 10:40:44.709984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.175 [2024-11-12 10:40:44.709993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.175 [2024-11-12 10:40:44.710014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.175 [2024-11-12 10:40:44.710023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.175 [2024-11-12 10:40:44.710031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3de50 is same with the state(6) to be set 00:18:56.175 [2024-11-12 10:40:44.710534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:56.175 [2024-11-12 10:40:44.710641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3de50 (9): Bad file descriptor 00:18:56.175 [2024-11-12 10:40:44.710754] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.175 [2024-11-12 10:40:44.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3de50 with addr=10.0.0.3, port=4420 00:18:56.175 [2024-11-12 10:40:44.710786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3de50 is same with the state(6) to be set 00:18:56.175 [2024-11-12 10:40:44.710805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3de50 (9): Bad file descriptor 00:18:56.175 [2024-11-12 10:40:44.710820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:56.175 [2024-11-12 10:40:44.710828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:56.175 [2024-11-12 10:40:44.710966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:56.175 [2024-11-12 10:40:44.711297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
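The wall of READ/WRITE commands above, each completed with ABORTED - SQ DELETION (00/08), is the bdev_nvme layer draining I/O that was still queued on qpair 1 when the reset tore down the submission queue; in SPDK's (SCT/SC) notation, 00 is the generic command status type and 08 is "Command Aborted due to SQ Deletion". A minimal sketch for summarizing such a dump offline, assuming the console output has been saved to a local file (the name console.log is hypothetical):

    # Count aborted completions per queue ID from a saved copy of this console log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c
    # Split the still-queued commands on I/O queue 1 into READs and WRITEs.
    grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+' console.log | awk '{print $1}' | sort | uniq -c

Both commands only re-parse the text above; they do not talk to the target.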
00:18:56.175 [2024-11-12 10:40:44.711317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:56.175 10:40:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:18:58.047 4298.50 IOPS, 16.79 MiB/s [2024-11-12T10:40:46.805Z] 2865.67 IOPS, 11.19 MiB/s [2024-11-12T10:40:46.805Z] [2024-11-12 10:40:46.711427] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:58.047 [2024-11-12 10:40:46.711542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3de50 with addr=10.0.0.3, port=4420
00:18:58.047 [2024-11-12 10:40:46.711556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3de50 is same with the state(6) to be set
00:18:58.047 [2024-11-12 10:40:46.711580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3de50 (9): Bad file descriptor
00:18:58.047 [2024-11-12 10:40:46.711598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:58.047 [2024-11-12 10:40:46.711606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:58.047 [2024-11-12 10:40:46.711616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:58.047 [2024-11-12 10:40:46.711626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:18:58.047 [2024-11-12 10:40:46.711636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:58.047 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:18:58.047 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:58.047 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:58.306 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:18:58.306 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:18:58.306 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:58.306 10:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:58.565 10:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:18:58.565 10:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:19:00.070 2149.25 IOPS, 8.40 MiB/s [2024-11-12T10:40:48.828Z] 1719.40 IOPS, 6.72 MiB/s [2024-11-12T10:40:48.828Z] [2024-11-12 10:40:48.711742] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:00.070 [2024-11-12 10:40:48.711811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3de50 with addr=10.0.0.3, port=4420
00:19:00.070 [2024-11-12 10:40:48.711825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3de50 is same with the state(6) to be set
00:19:00.070 [2024-11-12 10:40:48.711847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3de50 (9): Bad file descriptor
00:19:00.070 [2024-11-12 10:40:48.711865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*:
[nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:00.070 [2024-11-12 10:40:48.711874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:00.070 [2024-11-12 10:40:48.711883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:00.070 [2024-11-12 10:40:48.711893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:00.070 [2024-11-12 10:40:48.711903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:02.040 1432.83 IOPS, 5.60 MiB/s [2024-11-12T10:40:50.798Z] 1228.14 IOPS, 4.80 MiB/s [2024-11-12T10:40:50.798Z] [2024-11-12 10:40:50.711931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:02.040 [2024-11-12 10:40:50.711996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:02.040 [2024-11-12 10:40:50.712007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:02.040 [2024-11-12 10:40:50.712016] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:02.040 [2024-11-12 10:40:50.712027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:02.976 1074.62 IOPS, 4.20 MiB/s 00:19:02.976 Latency(us) 00:19:02.976 [2024-11-12T10:40:51.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.976 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.976 Verification LBA range: start 0x0 length 0x4000 00:19:02.976 NVMe0n1 : 8.13 1057.31 4.13 15.74 0.00 119152.97 3544.90 7046430.72 00:19:02.976 [2024-11-12T10:40:51.734Z] =================================================================================================================== 00:19:02.976 [2024-11-12T10:40:51.734Z] Total : 1057.31 4.13 15.74 0.00 119152.97 3544.90 7046430.72 00:19:02.976 { 00:19:02.976 "results": [ 00:19:02.976 { 00:19:02.976 "job": "NVMe0n1", 00:19:02.976 "core_mask": "0x4", 00:19:02.976 "workload": "verify", 00:19:02.976 "status": "finished", 00:19:02.976 "verify_range": { 00:19:02.976 "start": 0, 00:19:02.976 "length": 16384 00:19:02.976 }, 00:19:02.976 "queue_depth": 128, 00:19:02.976 "io_size": 4096, 00:19:02.976 "runtime": 8.130993, 00:19:02.976 "iops": 1057.312434040959, 00:19:02.976 "mibps": 4.130126695472496, 00:19:02.976 "io_failed": 128, 00:19:02.976 "io_timeout": 0, 00:19:02.976 "avg_latency_us": 119152.96904568898, 00:19:02.976 "min_latency_us": 3544.9018181818183, 00:19:02.976 "max_latency_us": 7046430.72 00:19:02.976 } 00:19:02.976 ], 00:19:02.976 "core_count": 1 00:19:02.976 } 00:19:03.544 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:03.544 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.544 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:03.803 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:03.803 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:03.803 10:40:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:03.803 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81533 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81509 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81509 ']' 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81509 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.061 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81509 00:19:04.320 killing process with pid 81509 00:19:04.320 Received shutdown signal, test time was about 9.245766 seconds 00:19:04.320 00:19:04.320 Latency(us) 00:19:04.320 [2024-11-12T10:40:53.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.320 [2024-11-12T10:40:53.078Z] =================================================================================================================== 00:19:04.320 [2024-11-12T10:40:53.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81509' 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81509 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81509 00:19:04.320 10:40:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:04.579 [2024-11-12 10:40:53.220060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:04.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81654
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81654 /var/tmp/bdevperf.sock
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81654 ']'
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:04.579 10:40:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:19:04.579 [2024-11-12 10:40:53.287958] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization...
00:19:04.579 [2024-11-12 10:40:53.288047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81654 ]
00:19:04.838 [2024-11-12 10:40:53.432343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:04.838 [2024-11-12 10:40:53.462358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:04.838 [2024-11-12 10:40:53.491878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:05.774 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:19:05.774 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:19:05.774 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:19:05.774 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:19:06.033 NVMe0n1
00:19:06.033 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81679
00:19:06.033 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:06.033 10:40:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:19:06.291 Running I/O for 10 seconds...
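The attach above arms bdev_nvme's reconnect machinery: --reconnect-delay-sec 1 sets the retry cadence, --fast-io-fail-timeout-sec 2 roughly bounds how long queued I/O waits before being failed back while reconnects continue, and --ctrlr-loss-timeout-sec 5 is when the controller is finally given up on. A minimal sketch of the fault-injection pattern this test drives, reusing the RPC invocations visible in this log (it assumes the same target, socket paths and NQN as the CI VM):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Expose the subsystem, then attach it through bdevperf's RPC socket with the timeout knobs.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Dropping the listener mid-run makes every reconnect attempt fail with errno 111
    # (connection refused) until the loss timeout expires and the controller is failed.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # List whatever controllers bdevperf still has; in the earlier run this came back empty
    # once the controller had been dropped.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

The empty [[ '' == '' ]] comparisons earlier in the log are exactly this kind of post-timeout check finding no controller and no bdev left.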
00:19:07.228 10:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:07.489 7589.00 IOPS, 29.64 MiB/s [2024-11-12T10:40:56.247Z] [2024-11-12 10:40:56.025657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.489 [2024-11-12 10:40:56.025909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.489 [2024-11-12 10:40:56.025917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.025927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.025935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.025944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.025963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.025971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.025980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.025988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 
[2024-11-12 10:40:56.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.026967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.026976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.490 [2024-11-12 10:40:56.027872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.490 [2024-11-12 10:40:56.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.027891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.491 [2024-11-12 10:40:56.027953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.027964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.027973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.027983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.027992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.028967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.028982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 
10:40:56.029280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.491 [2024-11-12 10:40:56.029390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.491 [2024-11-12 10:40:56.029410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:07.491 [2024-11-12 10:40:56.029903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.029971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.029980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.491 [2024-11-12 10:40:56.030298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.491 [2024-11-12 10:40:56.030309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70128 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.492 [2024-11-12 10:40:56.030647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.030847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.030858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.492 [2024-11-12 10:40:56.031774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.492 [2024-11-12 10:40:56.031785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.031866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.031882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.031891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.031901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.031911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.031921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.031930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.031940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.031949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.032041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.493 [2024-11-12 10:40:56.032061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1415280 is same with the state(6) to be set 00:19:07.493 [2024-11-12 10:40:56.032084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:07.493 [2024-11-12 10:40:56.032092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:07.493 [2024-11-12 10:40:56.032100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70424 len:8 PRP1 0x0 PRP2 0x0 00:19:07.493 [2024-11-12 10:40:56.032109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.493 [2024-11-12 10:40:56.032434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.493 [2024-11-12 10:40:56.032552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.493 [2024-11-12 10:40:56.032574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.493 [2024-11-12 10:40:56.032592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.493 [2024-11-12 10:40:56.032601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:07.493 [2024-11-12 10:40:56.033002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:07.493 [2024-11-12 10:40:56.033038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:07.493 [2024-11-12 10:40:56.033135] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.493 [2024-11-12 10:40:56.033156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420 00:19:07.493 [2024-11-12 10:40:56.033167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:07.493 [2024-11-12 10:40:56.033344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:07.493 [2024-11-12 10:40:56.033423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:07.493 [2024-11-12 10:40:56.033434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:07.493 [2024-11-12 10:40:56.033444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:07.493 [2024-11-12 10:40:56.033455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:19:07.493 [2024-11-12 10:40:56.033468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:07.493 10:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:08.429 4363.00 IOPS, 17.04 MiB/s [2024-11-12T10:40:57.187Z] [2024-11-12 10:40:57.033576] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:08.429 [2024-11-12 10:40:57.033642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420
00:19:08.429 [2024-11-12 10:40:57.033656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set
00:19:08.429 [2024-11-12 10:40:57.033678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor
00:19:08.429 [2024-11-12 10:40:57.033696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:08.429 [2024-11-12 10:40:57.033705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:08.429 [2024-11-12 10:40:57.033714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:08.429 [2024-11-12 10:40:57.033724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:19:08.429 [2024-11-12 10:40:57.033734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:08.429 10:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:08.688 [2024-11-12 10:40:57.305307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:08.688 10:40:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81679
00:19:09.514 2908.67 IOPS, 11.36 MiB/s [2024-11-12T10:40:58.272Z] [2024-11-12 10:40:58.052333] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:19:11.405 2181.50 IOPS, 8.52 MiB/s [2024-11-12T10:41:01.098Z] 3105.60 IOPS, 12.13 MiB/s [2024-11-12T10:41:02.034Z] 3932.00 IOPS, 15.36 MiB/s [2024-11-12T10:41:02.971Z] 4522.29 IOPS, 17.67 MiB/s [2024-11-12T10:41:03.907Z] 4965.00 IOPS, 19.39 MiB/s [2024-11-12T10:41:05.285Z] 5323.44 IOPS, 20.79 MiB/s [2024-11-12T10:41:05.285Z] 5584.80 IOPS, 21.82 MiB/s
00:19:16.527 Latency(us)
00:19:16.527 [2024-11-12T10:41:05.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:16.527 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:16.527 Verification LBA range: start 0x0 length 0x4000
00:19:16.527 NVMe0n1 : 10.01 5593.92 21.85 0.00 0.00 22850.71 1876.71 3035150.89
00:19:16.527 [2024-11-12T10:41:05.285Z] ===================================================================================================================
00:19:16.527 [2024-11-12T10:41:05.285Z] Total : 5593.92 21.85 0.00 0.00 22850.71 1876.71 3035150.89
00:19:16.527 {
00:19:16.527 "results": [
00:19:16.527 {
00:19:16.527 "job": "NVMe0n1",
00:19:16.527 "core_mask": "0x4",
00:19:16.527 "workload": "verify",
00:19:16.527 "status": "finished",
00:19:16.527 "verify_range": {
00:19:16.527 "start": 0,
00:19:16.527 "length": 16384
00:19:16.527 },
00:19:16.527 "queue_depth": 128,
00:19:16.527 "io_size": 4096,
00:19:16.527 "runtime": 10.006571,
00:19:16.527 "iops": 5593.924232386898,
00:19:16.527 "mibps": 21.851266532761322,
00:19:16.527 "io_failed": 0,
00:19:16.527 "io_timeout": 0,
00:19:16.527 "avg_latency_us": 22850.705550170853,
00:19:16.527 "min_latency_us": 1876.7127272727273,
00:19:16.527 "max_latency_us": 3035150.8945454545
00:19:16.527 }
00:19:16.527 ],
00:19:16.527 "core_count": 1
00:19:16.527 }
00:19:16.527 10:41:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81784
00:19:16.527 10:41:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:16.527 10:41:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:16.527 Running I/O for 10 seconds...
00:19:17.466 10:41:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:17.466 7189.00 IOPS, 28.08 MiB/s [2024-11-12T10:41:06.224Z] [2024-11-12 10:41:06.179301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.466 [2024-11-12 10:41:06.179354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.466 [2024-11-12 10:41:06.179384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.466 [2024-11-12 10:41:06.179393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.466 [2024-11-12 10:41:06.179402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.466 [2024-11-12 10:41:06.179410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.466 [2024-11-12 10:41:06.179419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.466 [2024-11-12 10:41:06.179442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.466 [2024-11-12 10:41:06.179451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:17.466 [2024-11-12 10:41:06.179979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.467 [2024-11-12 10:41:06.180006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 
10:41:06.180853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.180899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.180995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.467 [2024-11-12 10:41:06.181788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:64 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.467 [2024-11-12 10:41:06.181796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... long run of near-identical notices elided: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print every remaining queued command on sqid:1 (WRITE lba 66096-66704 and READ lba 65704-65816, len:8) and complete each one as ABORTED - SQ DELETION (00/08), all within about 5 ms starting at 10:41:06.181796, while the qpair is torn down ...]
00:19:17.470 [2024-11-12 10:41:06.186446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.470 [2024-11-12 10:41:06.186456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1416350 is same with the state(6) to be set 00:19:17.470 [2024-11-12 10:41:06.186469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:17.470 [2024-11-12 10:41:06.186476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:17.470 [2024-11-12 10:41:06.186484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:19:17.470 [2024-11-12 10:41:06.186563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.470 [2024-11-12 10:41:06.187146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:17.470 [2024-11-12 10:41:06.187202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:17.470 [2024-11-12 10:41:06.187521] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.470 [2024-11-12 10:41:06.187554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420 00:19:17.470 [2024-11-12 10:41:06.187567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:17.470 [2024-11-12 10:41:06.187587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:17.470 [2024-11-12 10:41:06.187604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:17.470 [2024-11-12 10:41:06.187614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:17.470 [2024-11-12 10:41:06.187623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:17.470 [2024-11-12 10:41:06.187761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:17.470 [2024-11-12 10:41:06.187865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:17.470 10:41:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:18.665 4106.00 IOPS, 16.04 MiB/s [2024-11-12T10:41:07.423Z] [2024-11-12 10:41:07.187976] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.665 [2024-11-12 10:41:07.188056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420 00:19:18.665 [2024-11-12 10:41:07.188071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:18.665 [2024-11-12 10:41:07.188092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:18.665 [2024-11-12 10:41:07.188110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:18.665 [2024-11-12 10:41:07.188119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:18.665 [2024-11-12 10:41:07.188130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:18.665 [2024-11-12 10:41:07.188140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:18.665 [2024-11-12 10:41:07.188151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:19.601 2737.33 IOPS, 10.69 MiB/s [2024-11-12T10:41:08.359Z] [2024-11-12 10:41:08.188258] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:19.601 [2024-11-12 10:41:08.188337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420 00:19:19.601 [2024-11-12 10:41:08.188352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:19.601 [2024-11-12 10:41:08.188374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:19.601 [2024-11-12 10:41:08.188392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:19.601 [2024-11-12 10:41:08.188403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:19.601 [2024-11-12 10:41:08.188413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:19.601 [2024-11-12 10:41:08.188424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:19.601 [2024-11-12 10:41:08.188434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:20.537 2053.00 IOPS, 8.02 MiB/s [2024-11-12T10:41:09.295Z] [2024-11-12 10:41:09.191025] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.537 [2024-11-12 10:41:09.191106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7e50 with addr=10.0.0.3, port=4420 00:19:20.537 [2024-11-12 10:41:09.191159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7e50 is same with the state(6) to be set 00:19:20.537 [2024-11-12 10:41:09.191657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7e50 (9): Bad file descriptor 00:19:20.537 [2024-11-12 10:41:09.192094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:20.537 [2024-11-12 10:41:09.192125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:20.537 [2024-11-12 10:41:09.192139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:20.537 [2024-11-12 10:41:09.192151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:20.537 [2024-11-12 10:41:09.192162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:20.537 10:41:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.819 [2024-11-12 10:41:09.468406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.819 10:41:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81784 00:19:21.644 1642.40 IOPS, 6.42 MiB/s [2024-11-12T10:41:10.402Z] [2024-11-12 10:41:10.217482] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
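A rough sketch of the fail/recover sequence exercised here, reconstructed only from the RPC calls visible in this log (the actual host/timeout.sh logic is not shown; the rpc.py path, subsystem NQN and 10.0.0.3:4420 address are taken from the log above, the rest is an assumption):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener: queued I/O on the host is aborted (SQ DELETION) and
  # bdev_nvme enters its reconnect loop, failing with connect() errno = 111.
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 3    # the test sleeps (host/timeout.sh@101 above) while reconnect attempts keep failing
  # Restore the listener: the next reconnect attempt succeeds and the controller
  # reset completes ("Resetting controller successful." above).
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420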
00:19:23.517 2736.50 IOPS, 10.69 MiB/s [2024-11-12T10:41:13.213Z] 3826.71 IOPS, 14.95 MiB/s [2024-11-12T10:41:14.150Z] 4630.38 IOPS, 18.09 MiB/s [2024-11-12T10:41:15.086Z] 5251.89 IOPS, 20.52 MiB/s [2024-11-12T10:41:15.086Z] 5763.50 IOPS, 22.51 MiB/s 00:19:26.328 Latency(us) 00:19:26.328 [2024-11-12T10:41:15.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.328 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.328 Verification LBA range: start 0x0 length 0x4000 00:19:26.328 NVMe0n1 : 10.01 5771.62 22.55 3849.48 0.00 13272.82 852.71 3019898.88 00:19:26.328 [2024-11-12T10:41:15.086Z] =================================================================================================================== 00:19:26.328 [2024-11-12T10:41:15.086Z] Total : 5771.62 22.55 3849.48 0.00 13272.82 0.00 3019898.88 00:19:26.328 { 00:19:26.328 "results": [ 00:19:26.328 { 00:19:26.328 "job": "NVMe0n1", 00:19:26.328 "core_mask": "0x4", 00:19:26.328 "workload": "verify", 00:19:26.328 "status": "finished", 00:19:26.328 "verify_range": { 00:19:26.328 "start": 0, 00:19:26.328 "length": 16384 00:19:26.328 }, 00:19:26.328 "queue_depth": 128, 00:19:26.328 "io_size": 4096, 00:19:26.328 "runtime": 10.008106, 00:19:26.328 "iops": 5771.621523592976, 00:19:26.328 "mibps": 22.54539657653506, 00:19:26.328 "io_failed": 38526, 00:19:26.328 "io_timeout": 0, 00:19:26.328 "avg_latency_us": 13272.816740966353, 00:19:26.328 "min_latency_us": 852.7127272727273, 00:19:26.328 "max_latency_us": 3019898.88 00:19:26.328 } 00:19:26.328 ], 00:19:26.328 "core_count": 1 00:19:26.328 } 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81654 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81654 ']' 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81654 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.328 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81654 00:19:26.587 killing process with pid 81654 00:19:26.587 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.587 00:19:26.587 Latency(us) 00:19:26.587 [2024-11-12T10:41:15.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.587 [2024-11-12T10:41:15.345Z] =================================================================================================================== 00:19:26.587 [2024-11-12T10:41:15.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.587 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:26.587 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:26.587 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81654' 00:19:26.587 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81654 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81654 00:19:26.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81898 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81898 /var/tmp/bdevperf.sock 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81898 ']' 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.588 10:41:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:26.588 [2024-11-12 10:41:15.293165] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:19:26.588 [2024-11-12 10:41:15.293282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81898 ] 00:19:26.847 [2024-11-12 10:41:15.435379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.847 [2024-11-12 10:41:15.467580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.847 [2024-11-12 10:41:15.498243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.781 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.781 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:27.781 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81914 00:19:27.781 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81898 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:27.781 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:28.039 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:28.298 NVMe0n1 00:19:28.298 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.298 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81956 00:19:28.298 10:41:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:28.298 Running I/O for 10 seconds... 
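A note on the reconnect knobs used above (an interpretation of the flags, not output from this run): --reconnect-delay-sec 2 spaces reconnect attempts roughly two seconds apart, and --ctrlr-loss-timeout-sec 5 bounds how long bdev_nvme keeps retrying before it declares the controller lost, so only a few attempts fit while the listener is down. A minimal sketch of the same attach, assuming the rpc.py path and bdevperf socket shown above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # -b NVMe0 names the resulting bdev; the two timeouts control the retry behaviour probed by this test.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2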
00:19:29.233 10:41:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:29.493 16637.00 IOPS, 64.99 MiB/s [2024-11-12T10:41:18.251Z] [2024-11-12 10:41:18.123325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.493 [2024-11-12 10:41:18.123390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.493 [2024-11-12 10:41:18.123411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.493 [2024-11-12 10:41:18.123429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.493 [2024-11-12 10:41:18.123447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adee50 is same with the state(6) to be set 00:19:29.493 [2024-11-12 10:41:18.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.493 [2024-11-12 10:41:18.123729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.493 [2024-11-12 10:41:18.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.493 [2024-11-12 10:41:18.123768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.123786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.123794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.123804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.123813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
[... long run of near-identical notices elided: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print each queued READ on sqid:1 (cid 5 through 44, len:8, SGL TRANSPORT DATA BLOCK) and complete it as ABORTED - SQ DELETION (00/08) after the 10.0.0.3:4420 listener is removed ...]
00:19:29.494 [2024-11-12 10:41:18.124611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21240 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.494 [2024-11-12 10:41:18.124801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.494 [2024-11-12 10:41:18.124874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.494 [2024-11-12 10:41:18.124884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.124984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.124994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:29.495 [2024-11-12 10:41:18.125547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.495 [2024-11-12 10:41:18.125591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.495 [2024-11-12 10:41:18.125601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 
10:41:18.125728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.125985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.125994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.496 [2024-11-12 10:41:18.126105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4c140 is same with the state(6) to be set 00:19:29.496 [2024-11-12 10:41:18.126125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:29.496 [2024-11-12 10:41:18.126131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:29.496 [2024-11-12 10:41:18.126139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31648 len:8 PRP1 0x0 PRP2 0x0 00:19:29.496 [2024-11-12 10:41:18.126147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.496 [2024-11-12 10:41:18.126427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:29.496 [2024-11-12 10:41:18.126470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adee50 (9): Bad file descriptor 00:19:29.496 [2024-11-12 10:41:18.126566] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.496 [2024-11-12 10:41:18.126589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adee50 with addr=10.0.0.3, port=4420 00:19:29.496 [2024-11-12 10:41:18.126599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adee50 is same with the state(6) to be set 00:19:29.496 [2024-11-12 10:41:18.126616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adee50 (9): Bad file descriptor 00:19:29.496 [2024-11-12 10:41:18.126631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:29.496 [2024-11-12 10:41:18.126640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:29.496 [2024-11-12 10:41:18.126650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:29.496 [2024-11-12 10:41:18.126659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
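The completions above are the host printing each queued READ (sqid:1, cid 15 through 126 in this excerpt, random LBAs from the randread workload) that was aborted when the I/O submission queue was deleted during the controller reset; status (00/08) is the generic NVMe status "Command Aborted due to SQ Deletion". The reconnect attempt that follows fails with connect() errno = 111 (ECONNREFUSED), meaning nothing is accepting connections on 10.0.0.3:4420 at that moment, so controller reinitialization fails and another reset is scheduled. A rough triage sketch for a saved console log of this kind (the log file name is an assumption, not part of the run):

    # Count aborted queued I/Os and refused reconnects in a saved log.
    # "console.log" is a hypothetical capture of the output shown here.
    log=console.log
    aborts=$(grep -c 'ABORTED - SQ DELETION (00/08)' "$log")
    refused=$(grep -c 'connect() failed, errno = 111' "$log")
    echo "I/Os aborted by SQ deletion: $aborts"
    echo "reconnects refused (ECONNREFUSED): $refused"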
00:19:29.496 [2024-11-12 10:41:18.126670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:29.496 10:41:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81956 00:19:31.366 9779.00 IOPS, 38.20 MiB/s [2024-11-12T10:41:20.382Z] 6519.33 IOPS, 25.47 MiB/s [2024-11-12T10:41:20.382Z] [2024-11-12 10:41:20.126903] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.624 [2024-11-12 10:41:20.126973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adee50 with addr=10.0.0.3, port=4420 00:19:31.624 [2024-11-12 10:41:20.126988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adee50 is same with the state(6) to be set 00:19:31.624 [2024-11-12 10:41:20.127012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adee50 (9): Bad file descriptor 00:19:31.624 [2024-11-12 10:41:20.127030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:31.624 [2024-11-12 10:41:20.127040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:31.624 [2024-11-12 10:41:20.127050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:31.624 [2024-11-12 10:41:20.127060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:31.624 [2024-11-12 10:41:20.127070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:33.495 4889.50 IOPS, 19.10 MiB/s [2024-11-12T10:41:22.253Z] 3911.60 IOPS, 15.28 MiB/s [2024-11-12T10:41:22.253Z] [2024-11-12 10:41:22.127321] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.495 [2024-11-12 10:41:22.127387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adee50 with addr=10.0.0.3, port=4420 00:19:33.495 [2024-11-12 10:41:22.127403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adee50 is same with the state(6) to be set 00:19:33.495 [2024-11-12 10:41:22.127428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adee50 (9): Bad file descriptor 00:19:33.495 [2024-11-12 10:41:22.127461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:33.495 [2024-11-12 10:41:22.127471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:33.495 [2024-11-12 10:41:22.127482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:33.495 [2024-11-12 10:41:22.127493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:33.495 [2024-11-12 10:41:22.127518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:35.367 3259.67 IOPS, 12.73 MiB/s [2024-11-12T10:41:24.384Z] 2794.00 IOPS, 10.91 MiB/s [2024-11-12T10:41:24.384Z] [2024-11-12 10:41:24.127674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:19:35.626 [2024-11-12 10:41:24.127731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:35.626 [2024-11-12 10:41:24.127743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:35.626 [2024-11-12 10:41:24.127769] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:35.626 [2024-11-12 10:41:24.127780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:36.453 2444.75 IOPS, 9.55 MiB/s 00:19:36.453 Latency(us) 00:19:36.453 [2024-11-12T10:41:25.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.453 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:36.453 NVMe0n1 : 8.18 2390.53 9.34 15.65 0.00 53155.71 7149.38 7015926.69 00:19:36.453 [2024-11-12T10:41:25.211Z] =================================================================================================================== 00:19:36.453 [2024-11-12T10:41:25.211Z] Total : 2390.53 9.34 15.65 0.00 53155.71 7149.38 7015926.69 00:19:36.453 { 00:19:36.453 "results": [ 00:19:36.453 { 00:19:36.453 "job": "NVMe0n1", 00:19:36.453 "core_mask": "0x4", 00:19:36.453 "workload": "randread", 00:19:36.453 "status": "finished", 00:19:36.453 "queue_depth": 128, 00:19:36.453 "io_size": 4096, 00:19:36.453 "runtime": 8.181447, 00:19:36.453 "iops": 2390.530672630404, 00:19:36.453 "mibps": 9.338010439962515, 00:19:36.453 "io_failed": 128, 00:19:36.453 "io_timeout": 0, 00:19:36.453 "avg_latency_us": 53155.70693192208, 00:19:36.453 "min_latency_us": 7149.381818181818, 00:19:36.453 "max_latency_us": 7015926.69090909 00:19:36.453 } 00:19:36.453 ], 00:19:36.453 "core_count": 1 00:19:36.453 } 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.453 Attaching 5 probes... 
00:19:36.453 1359.924030: reset bdev controller NVMe0 00:19:36.453 1360.009403: reconnect bdev controller NVMe0 00:19:36.453 3360.277819: reconnect delay bdev controller NVMe0 00:19:36.453 3360.313497: reconnect bdev controller NVMe0 00:19:36.453 5360.694185: reconnect delay bdev controller NVMe0 00:19:36.453 5360.740006: reconnect bdev controller NVMe0 00:19:36.453 7361.138559: reconnect delay bdev controller NVMe0 00:19:36.453 7361.159745: reconnect bdev controller NVMe0 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81914 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81898 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81898 ']' 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81898 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81898 00:19:36.453 killing process with pid 81898 00:19:36.453 Received shutdown signal, test time was about 8.254258 seconds 00:19:36.453 00:19:36.453 Latency(us) 00:19:36.453 [2024-11-12T10:41:25.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.453 [2024-11-12T10:41:25.211Z] =================================================================================================================== 00:19:36.453 [2024-11-12T10:41:25.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81898' 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81898 00:19:36.453 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81898 00:19:36.712 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.971 10:41:25 
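The trace above records one controller reset at timestamp ~1360 and reconnect-delay events at ~3360, ~5360 and ~7361, about 2000 apart, consistent with the 2-second spacing of the reconnect attempts in the wall-clock log. The grep -c over trace.txt therefore returns 3, the (( 3 <= 2 )) guard is false, and the test moves on to cleanup instead of bailing out. The run summary further back reports io_failed = 128, which matches the configured queue depth of 128 reads being aborted at the reset. A condensed sketch of the same verification (the trace path is taken from the commands above; the minimum count of 3 is inferred from the arithmetic test shown):

    # Verify that enough delayed reconnects were traced during the run.
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    min_delays=3   # inferred: the test appears to fail when the count is <= 2
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays < min_delays )); then
        echo "only $delays delayed reconnects traced (need >= $min_delays)" >&2
        exit 1
    fi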
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.971 rmmod nvme_tcp 00:19:36.971 rmmod nvme_fabrics 00:19:36.971 rmmod nvme_keyring 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81472 ']' 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81472 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81472 ']' 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81472 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.971 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81472 00:19:37.236 killing process with pid 81472 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81472' 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81472 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81472 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:37.236 10:41:25 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:37.236 10:41:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:37.517 00:19:37.517 real 0m46.550s 00:19:37.517 user 2m16.516s 00:19:37.517 sys 0m5.795s 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.517 ************************************ 00:19:37.517 END TEST nvmf_timeout 00:19:37.517 ************************************ 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:37.517 ************************************ 00:19:37.517 END TEST nvmf_host 00:19:37.517 ************************************ 00:19:37.517 00:19:37.517 real 5m3.111s 00:19:37.517 user 13m12.505s 00:19:37.517 sys 1m7.101s 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.517 10:41:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.517 10:41:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:37.517 10:41:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:37.517 ************************************ 00:19:37.517 END TEST nvmf_tcp 00:19:37.517 ************************************ 00:19:37.517 00:19:37.517 real 12m30.737s 00:19:37.517 user 30m13.232s 00:19:37.517 sys 3m8.090s 00:19:37.517 10:41:26 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.517 10:41:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.517 10:41:26 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:19:37.517 10:41:26 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:37.517 10:41:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:37.517 10:41:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.517 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:19:37.802 ************************************ 00:19:37.802 START TEST nvmf_dif 00:19:37.802 ************************************ 00:19:37.802 10:41:26 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:37.802 * Looking for test storage... 
00:19:37.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.802 10:41:26 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:37.802 10:41:26 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:19:37.802 10:41:26 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:37.802 10:41:26 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.802 10:41:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.803 --rc genhtml_branch_coverage=1 00:19:37.803 --rc genhtml_function_coverage=1 00:19:37.803 --rc genhtml_legend=1 00:19:37.803 --rc geninfo_all_blocks=1 00:19:37.803 --rc geninfo_unexecuted_blocks=1 00:19:37.803 00:19:37.803 ' 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.803 --rc genhtml_branch_coverage=1 00:19:37.803 --rc genhtml_function_coverage=1 00:19:37.803 --rc genhtml_legend=1 00:19:37.803 --rc geninfo_all_blocks=1 00:19:37.803 --rc geninfo_unexecuted_blocks=1 00:19:37.803 00:19:37.803 ' 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:19:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.803 --rc genhtml_branch_coverage=1 00:19:37.803 --rc genhtml_function_coverage=1 00:19:37.803 --rc genhtml_legend=1 00:19:37.803 --rc geninfo_all_blocks=1 00:19:37.803 --rc geninfo_unexecuted_blocks=1 00:19:37.803 00:19:37.803 ' 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.803 --rc genhtml_branch_coverage=1 00:19:37.803 --rc genhtml_function_coverage=1 00:19:37.803 --rc genhtml_legend=1 00:19:37.803 --rc geninfo_all_blocks=1 00:19:37.803 --rc geninfo_unexecuted_blocks=1 00:19:37.803 00:19:37.803 ' 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.803 10:41:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.803 10:41:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.803 10:41:26 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.803 10:41:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.803 10:41:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:37.803 10:41:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.803 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:37.803 10:41:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:37.803 10:41:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:37.803 10:41:26 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:37.803 Cannot find device "nvmf_init_br" 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:37.803 Cannot find device "nvmf_init_br2" 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:37.803 Cannot find device "nvmf_tgt_br" 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.803 Cannot find device "nvmf_tgt_br2" 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:37.803 10:41:26 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:37.803 Cannot find device "nvmf_init_br" 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:37.804 Cannot find device "nvmf_init_br2" 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:37.804 Cannot find device "nvmf_tgt_br" 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:37.804 10:41:26 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:38.093 Cannot find device "nvmf_tgt_br2" 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:38.093 Cannot find device "nvmf_br" 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:19:38.093 Cannot find device "nvmf_init_if" 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:38.093 Cannot find device "nvmf_init_if2" 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.093 10:41:26 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:38.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:38.093 00:19:38.093 --- 10.0.0.3 ping statistics --- 00:19:38.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.093 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:38.093 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:38.093 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:38.093 00:19:38.093 --- 10.0.0.4 ping statistics --- 00:19:38.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.093 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:38.093 00:19:38.093 --- 10.0.0.1 ping statistics --- 00:19:38.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.093 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:38.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:38.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:38.093 00:19:38.093 --- 10.0.0.2 ping statistics --- 00:19:38.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.093 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:38.093 10:41:26 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.662 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.662 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.662 10:41:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:38.662 10:41:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82457 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82457 00:19:38.662 10:41:27 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 82457 ']' 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.662 10:41:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.662 [2024-11-12 10:41:27.272391] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:19:38.662 [2024-11-12 10:41:27.272486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.921 [2024-11-12 10:41:27.426472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.921 [2024-11-12 10:41:27.463533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
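For readability, the nvmf_veth_init / nvmfappstart sequence traced above condenses to the shell sketch below. The namespace, interface names, 10.0.0.x/24 addresses, iptables rules and the nvmf_tgt invocation are copied from the trace; this is a hand-condensed reconstruction of what common.sh does here, not its exact code.

# Target-side namespace plus four veth pairs, later bridged together
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator addresses stay on the host, target addresses live inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers so initiator and target interfaces share one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Open NVMe/TCP port 4420 towards the initiator interfaces and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks in both directions, then the target app started inside the namespace
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &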
00:19:38.921 [2024-11-12 10:41:27.463599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.921 [2024-11-12 10:41:27.463612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.921 [2024-11-12 10:41:27.463622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.921 [2024-11-12 10:41:27.463631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.921 [2024-11-12 10:41:27.463983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.921 [2024-11-12 10:41:27.497269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:19:38.921 10:41:27 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 10:41:27 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.921 10:41:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:38.921 10:41:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 [2024-11-12 10:41:27.594746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.921 10:41:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 ************************************ 00:19:38.921 START TEST fio_dif_1_default 00:19:38.921 ************************************ 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 bdev_null0 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:38.921 
10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.921 [2024-11-12 10:41:27.638822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:38.921 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:38.922 { 00:19:38.922 "params": { 00:19:38.922 "name": "Nvme$subsystem", 00:19:38.922 "trtype": "$TEST_TRANSPORT", 00:19:38.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.922 "adrfam": "ipv4", 00:19:38.922 "trsvcid": "$NVMF_PORT", 00:19:38.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.922 "hdgst": ${hdgst:-false}, 00:19:38.922 "ddgst": ${ddgst:-false} 00:19:38.922 }, 00:19:38.922 "method": "bdev_nvme_attach_controller" 00:19:38.922 } 00:19:38.922 EOF 00:19:38.922 )") 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:38.922 10:41:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:38.922 "params": { 00:19:38.922 "name": "Nvme0", 00:19:38.922 "trtype": "tcp", 00:19:38.922 "traddr": "10.0.0.3", 00:19:38.922 "adrfam": "ipv4", 00:19:38.922 "trsvcid": "4420", 00:19:38.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:38.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:38.922 "hdgst": false, 00:19:38.922 "ddgst": false 00:19:38.922 }, 00:19:38.922 "method": "bdev_nvme_attach_controller" 00:19:38.922 }' 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.181 10:41:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.181 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:39.181 fio-3.35 00:19:39.181 Starting 1 thread 00:19:51.390 00:19:51.390 filename0: (groupid=0, jobs=1): err= 0: pid=82516: Tue Nov 12 10:41:38 2024 00:19:51.390 read: IOPS=9100, BW=35.5MiB/s (37.3MB/s)(356MiB/10001msec) 00:19:51.390 slat (usec): min=6, max=162, avg= 8.33, stdev= 3.90 00:19:51.390 clat (usec): min=331, max=2839, avg=414.67, stdev=55.20 00:19:51.390 lat (usec): min=337, max=2869, avg=423.00, stdev=56.22 00:19:51.390 clat percentiles (usec): 00:19:51.390 | 1.00th=[ 338], 5.00th=[ 351], 
10.00th=[ 355], 20.00th=[ 367], 00:19:51.390 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 00:19:51.390 | 70.00th=[ 437], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 515], 00:19:51.390 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 619], 00:19:51.390 | 99.99th=[ 701] 00:19:51.390 bw ( KiB/s): min=31489, max=39680, per=100.00%, avg=36584.47, stdev=2180.12, samples=19 00:19:51.390 iops : min= 7872, max= 9920, avg=9146.11, stdev=545.06, samples=19 00:19:51.390 lat (usec) : 500=91.65%, 750=8.35% 00:19:51.390 lat (msec) : 2=0.01%, 4=0.01% 00:19:51.390 cpu : usr=85.55%, sys=12.53%, ctx=22, majf=0, minf=9 00:19:51.390 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.390 issued rwts: total=91016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.390 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:51.390 00:19:51.390 Run status group 0 (all jobs): 00:19:51.390 READ: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=356MiB (373MB), run=10001-10001msec 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 ************************************ 00:19:51.390 END TEST fio_dif_1_default 00:19:51.390 ************************************ 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 00:19:51.390 real 0m10.979s 00:19:51.390 user 0m9.188s 00:19:51.390 sys 0m1.503s 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:51.390 10:41:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:51.390 10:41:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 ************************************ 00:19:51.390 START TEST fio_dif_1_multi_subsystems 00:19:51.390 ************************************ 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # 
fio_dif_1_multi_subsystems 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 bdev_null0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 [2024-11-12 10:41:38.675403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 bdev_null1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.390 { 00:19:51.390 "params": { 00:19:51.390 "name": "Nvme$subsystem", 00:19:51.390 "trtype": "$TEST_TRANSPORT", 00:19:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.390 "adrfam": "ipv4", 00:19:51.390 "trsvcid": "$NVMF_PORT", 00:19:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.390 "hdgst": ${hdgst:-false}, 00:19:51.390 "ddgst": ${ddgst:-false} 00:19:51.390 }, 00:19:51.390 "method": "bdev_nvme_attach_controller" 00:19:51.390 } 00:19:51.390 EOF 00:19:51.390 )") 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:51.390 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:51.390 10:41:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.391 { 00:19:51.391 "params": { 00:19:51.391 "name": "Nvme$subsystem", 00:19:51.391 "trtype": "$TEST_TRANSPORT", 00:19:51.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.391 "adrfam": "ipv4", 00:19:51.391 "trsvcid": "$NVMF_PORT", 00:19:51.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.391 "hdgst": ${hdgst:-false}, 00:19:51.391 "ddgst": ${ddgst:-false} 00:19:51.391 }, 00:19:51.391 "method": "bdev_nvme_attach_controller" 00:19:51.391 } 00:19:51.391 EOF 00:19:51.391 )") 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
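The target-side plumbing traced above for the two-file case reduces to the rpc.py sequence below. Here rpc_cmd is taken to be the test framework's wrapper around scripts/rpc.py against the target's default RPC socket (an assumption; the wrapper itself is not shown in this log), while the transport options, bdev geometry, NQNs, serial numbers and the 10.0.0.3:4420 listener are exactly as traced.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with DIF insert/strip, created once earlier by target/dif.sh create_transport
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Two 64 MB null bdevs: 512-byte blocks, 16-byte metadata, DIF type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
# One subsystem per bdev, both listening on the in-namespace address 10.0.0.3:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420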
00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:51.391 "params": { 00:19:51.391 "name": "Nvme0", 00:19:51.391 "trtype": "tcp", 00:19:51.391 "traddr": "10.0.0.3", 00:19:51.391 "adrfam": "ipv4", 00:19:51.391 "trsvcid": "4420", 00:19:51.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.391 "hdgst": false, 00:19:51.391 "ddgst": false 00:19:51.391 }, 00:19:51.391 "method": "bdev_nvme_attach_controller" 00:19:51.391 },{ 00:19:51.391 "params": { 00:19:51.391 "name": "Nvme1", 00:19:51.391 "trtype": "tcp", 00:19:51.391 "traddr": "10.0.0.3", 00:19:51.391 "adrfam": "ipv4", 00:19:51.391 "trsvcid": "4420", 00:19:51.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.391 "hdgst": false, 00:19:51.391 "ddgst": false 00:19:51.391 }, 00:19:51.391 "method": "bdev_nvme_attach_controller" 00:19:51.391 }' 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.391 10:41:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.391 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:51.391 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:51.391 fio-3.35 00:19:51.391 Starting 2 threads 00:20:01.369 00:20:01.370 filename0: (groupid=0, jobs=1): err= 0: pid=82676: Tue Nov 12 10:41:49 2024 00:20:01.370 read: IOPS=4986, BW=19.5MiB/s (20.4MB/s)(195MiB/10001msec) 00:20:01.370 slat (nsec): min=6297, max=70726, avg=12902.93, stdev=4964.49 00:20:01.370 clat (usec): min=575, max=1274, avg=767.12, stdev=75.29 00:20:01.370 lat (usec): min=582, max=1286, avg=780.02, stdev=76.49 00:20:01.370 clat percentiles (usec): 00:20:01.370 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 701], 00:20:01.370 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:20:01.370 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 906], 00:20:01.370 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1037], 00:20:01.370 | 99.99th=[ 1237] 00:20:01.370 bw ( KiB/s): min=17920, max=20960, per=49.87%, avg=19893.53, stdev=956.47, samples=19 00:20:01.370 iops : min= 4480, max= 
5240, avg=4973.37, stdev=239.13, samples=19 00:20:01.370 lat (usec) : 750=46.66%, 1000=53.17% 00:20:01.370 lat (msec) : 2=0.18% 00:20:01.370 cpu : usr=89.28%, sys=9.39%, ctx=12, majf=0, minf=0 00:20:01.370 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.370 issued rwts: total=49872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.370 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:01.370 filename1: (groupid=0, jobs=1): err= 0: pid=82677: Tue Nov 12 10:41:49 2024 00:20:01.370 read: IOPS=4986, BW=19.5MiB/s (20.4MB/s)(195MiB/10001msec) 00:20:01.370 slat (nsec): min=6240, max=70558, avg=12918.39, stdev=5026.40 00:20:01.370 clat (usec): min=611, max=1253, avg=766.34, stdev=69.96 00:20:01.370 lat (usec): min=623, max=1268, avg=779.26, stdev=70.69 00:20:01.370 clat percentiles (usec): 00:20:01.370 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 701], 00:20:01.370 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 775], 00:20:01.370 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:20:01.370 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1004], 99.95th=[ 1020], 00:20:01.370 | 99.99th=[ 1188] 00:20:01.370 bw ( KiB/s): min=17920, max=20960, per=49.87%, avg=19893.53, stdev=956.47, samples=19 00:20:01.370 iops : min= 4480, max= 5240, avg=4973.37, stdev=239.13, samples=19 00:20:01.370 lat (usec) : 750=48.04%, 1000=51.86% 00:20:01.370 lat (msec) : 2=0.10% 00:20:01.370 cpu : usr=89.71%, sys=8.92%, ctx=11, majf=0, minf=0 00:20:01.370 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.370 issued rwts: total=49872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.370 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:01.370 00:20:01.370 Run status group 0 (all jobs): 00:20:01.370 READ: bw=39.0MiB/s (40.9MB/s), 19.5MiB/s-19.5MiB/s (20.4MB/s-20.4MB/s), io=390MiB (409MB), run=10001-10001msec 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- 
# set +x 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 ************************************ 00:20:01.370 END TEST fio_dif_1_multi_subsystems 00:20:01.370 ************************************ 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 00:20:01.370 real 0m11.081s 00:20:01.370 user 0m18.648s 00:20:01.370 sys 0m2.101s 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 10:41:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:01.370 10:41:49 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:01.370 10:41:49 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 ************************************ 00:20:01.370 START TEST fio_dif_rand_params 00:20:01.370 ************************************ 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 bdev_null0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.370 [2024-11-12 10:41:49.815252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:01.370 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.371 { 00:20:01.371 "params": { 00:20:01.371 "name": "Nvme$subsystem", 00:20:01.371 "trtype": "$TEST_TRANSPORT", 00:20:01.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.371 "adrfam": 
"ipv4", 00:20:01.371 "trsvcid": "$NVMF_PORT", 00:20:01.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.371 "hdgst": ${hdgst:-false}, 00:20:01.371 "ddgst": ${ddgst:-false} 00:20:01.371 }, 00:20:01.371 "method": "bdev_nvme_attach_controller" 00:20:01.371 } 00:20:01.371 EOF 00:20:01.371 )") 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:01.371 "params": { 00:20:01.371 "name": "Nvme0", 00:20:01.371 "trtype": "tcp", 00:20:01.371 "traddr": "10.0.0.3", 00:20:01.371 "adrfam": "ipv4", 00:20:01.371 "trsvcid": "4420", 00:20:01.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.371 "hdgst": false, 00:20:01.371 "ddgst": false 00:20:01.371 }, 00:20:01.371 "method": "bdev_nvme_attach_controller" 00:20:01.371 }' 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.371 10:41:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.371 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:01.371 ... 
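The job description handed to fio on /dev/fd/61 is not echoed in the log. Based on the banner just above (randread, 128 KiB blocks, iodepth 3, spdk_bdev engine) and the parameters this test set earlier (numjobs=3, runtime=5), an equivalent standalone job file would look roughly like the sketch below. The filename0 section name matches the banner; filename=Nvme0n1 assumes the usual SPDK naming for namespace 1 of the controller attached as Nvme0, thread=1 is the standing requirement of the SPDK fio plugins rather than something visible here, and time_based is inferred from the ~5003-5006 ms run lengths reported further below.

[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1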
00:20:01.371 fio-3.35 00:20:01.371 Starting 3 threads 00:20:07.937 00:20:07.937 filename0: (groupid=0, jobs=1): err= 0: pid=82827: Tue Nov 12 10:41:55 2024 00:20:07.937 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(169MiB/5005msec) 00:20:07.937 slat (nsec): min=5118, max=54270, avg=14628.03, stdev=4800.89 00:20:07.937 clat (usec): min=10212, max=13237, avg=11088.94, stdev=454.90 00:20:07.937 lat (usec): min=10224, max=13254, avg=11103.57, stdev=455.48 00:20:07.937 clat percentiles (usec): 00:20:07.937 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:20:07.937 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:20:07.937 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:20:07.937 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13173], 99.95th=[13173], 00:20:07.937 | 99.99th=[13173] 00:20:07.937 bw ( KiB/s): min=33024, max=36096, per=33.30%, avg=34482.33, stdev=1048.68, samples=9 00:20:07.937 iops : min= 258, max= 282, avg=269.33, stdev= 8.19, samples=9 00:20:07.937 lat (msec) : 20=100.00% 00:20:07.937 cpu : usr=90.47%, sys=9.01%, ctx=8, majf=0, minf=0 00:20:07.937 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.937 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.937 filename0: (groupid=0, jobs=1): err= 0: pid=82828: Tue Nov 12 10:41:55 2024 00:20:07.937 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(169MiB/5003msec) 00:20:07.937 slat (nsec): min=6481, max=48802, avg=14767.22, stdev=4759.65 00:20:07.937 clat (usec): min=10220, max=12273, avg=11083.47, stdev=443.10 00:20:07.937 lat (usec): min=10232, max=12306, avg=11098.23, stdev=443.80 00:20:07.937 clat percentiles (usec): 00:20:07.937 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:20:07.937 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:20:07.937 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:20:07.937 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12256], 99.95th=[12256], 00:20:07.937 | 99.99th=[12256] 00:20:07.937 bw ( KiB/s): min=33024, max=35328, per=33.29%, avg=34474.67, stdev=896.00, samples=9 00:20:07.937 iops : min= 258, max= 276, avg=269.33, stdev= 7.00, samples=9 00:20:07.937 lat (msec) : 20=100.00% 00:20:07.937 cpu : usr=90.14%, sys=9.32%, ctx=12, majf=0, minf=0 00:20:07.937 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.937 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.937 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.937 filename0: (groupid=0, jobs=1): err= 0: pid=82829: Tue Nov 12 10:41:55 2024 00:20:07.937 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5006msec) 00:20:07.937 slat (nsec): min=4236, max=64145, avg=14096.07, stdev=5062.85 00:20:07.937 clat (usec): min=9796, max=14549, avg=11092.47, stdev=475.55 00:20:07.937 lat (usec): min=9803, max=14566, avg=11106.56, stdev=476.20 00:20:07.937 clat percentiles (usec): 00:20:07.937 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:20:07.937 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 
60.00th=[11207], 00:20:07.937 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:20:07.937 | 99.00th=[12125], 99.50th=[12256], 99.90th=[14484], 99.95th=[14615], 00:20:07.937 | 99.99th=[14615] 00:20:07.937 bw ( KiB/s): min=33024, max=36096, per=33.29%, avg=34474.67, stdev=1047.73, samples=9 00:20:07.937 iops : min= 258, max= 282, avg=269.33, stdev= 8.19, samples=9 00:20:07.937 lat (msec) : 10=0.22%, 20=99.78% 00:20:07.937 cpu : usr=90.27%, sys=8.75%, ctx=91, majf=0, minf=0 00:20:07.937 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.938 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.938 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.938 00:20:07.938 Run status group 0 (all jobs): 00:20:07.938 READ: bw=101MiB/s (106MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.4MB/s), io=506MiB (531MB), run=5003-5006msec 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:07.938 10:41:55 
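From this point the test switches to NULL_DIF=2: the namespaces are rebuilt as 64 MiB null bdevs with 512-byte blocks, 16 bytes of per-block metadata and DIF type 2 protection, via the bdev_null_create call visible in the trace. rpc_cmd is the autotest wrapper around scripts/rpc.py, so outside the harness the equivalent call would look roughly like the sketch below (a running SPDK target on the default RPC socket is assumed).

# Create a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 2,
# matching the rpc_cmd bdev_null_create arguments shown in the trace.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Inspect the result; bdev_get_bdevs reports the block size, metadata and DIF settings.
./scripts/rpc.py bdev_get_bdevs -b bdev_null0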
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 bdev_null0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 [2024-11-12 10:41:55.743312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 bdev_null1 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 bdev_null2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.938 { 00:20:07.938 "params": { 00:20:07.938 "name": "Nvme$subsystem", 00:20:07.938 "trtype": "$TEST_TRANSPORT", 00:20:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.938 "adrfam": "ipv4", 00:20:07.938 "trsvcid": "$NVMF_PORT", 00:20:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
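Each create_subsystem invocation in the trace reduces to the same four RPCs: create the null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.3:4420. destroy_subsystem, seen earlier, mirrors it with nvmf_delete_subsystem and bdev_null_delete. A standalone sketch for subsystem 0 is below; it assumes the TCP transport was already created earlier in the test and calls scripts/rpc.py directly instead of going through the rpc_cmd wrapper.

# Hedged sketch of create_subsystem/destroy_subsystem for id 0.
id=0
./scripts/rpc.py bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
    --serial-number "53313233-$id" --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
    -t tcp -a 10.0.0.3 -s 4420

# Teardown, mirroring destroy_subsystem in the trace.
./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$id"
./scripts/rpc.py bdev_null_delete "bdev_null$id"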
00:20:07.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.938 "hdgst": ${hdgst:-false}, 00:20:07.938 "ddgst": ${ddgst:-false} 00:20:07.938 }, 00:20:07.938 "method": "bdev_nvme_attach_controller" 00:20:07.938 } 00:20:07.938 EOF 00:20:07.938 )") 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:07.938 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.938 { 00:20:07.938 "params": { 00:20:07.938 "name": "Nvme$subsystem", 00:20:07.938 "trtype": "$TEST_TRANSPORT", 00:20:07.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.938 "adrfam": "ipv4", 00:20:07.938 "trsvcid": "$NVMF_PORT", 00:20:07.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.939 "hdgst": ${hdgst:-false}, 00:20:07.939 "ddgst": ${ddgst:-false} 00:20:07.939 }, 00:20:07.939 "method": "bdev_nvme_attach_controller" 00:20:07.939 } 00:20:07.939 EOF 00:20:07.939 )") 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:07.939 10:41:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.939 { 00:20:07.939 "params": { 00:20:07.939 "name": "Nvme$subsystem", 00:20:07.939 "trtype": "$TEST_TRANSPORT", 00:20:07.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.939 "adrfam": "ipv4", 00:20:07.939 "trsvcid": "$NVMF_PORT", 00:20:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.939 "hdgst": ${hdgst:-false}, 00:20:07.939 "ddgst": ${ddgst:-false} 00:20:07.939 }, 00:20:07.939 "method": "bdev_nvme_attach_controller" 00:20:07.939 } 00:20:07.939 EOF 00:20:07.939 )") 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:07.939 "params": { 00:20:07.939 "name": "Nvme0", 00:20:07.939 "trtype": "tcp", 00:20:07.939 "traddr": "10.0.0.3", 00:20:07.939 "adrfam": "ipv4", 00:20:07.939 "trsvcid": "4420", 00:20:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.939 "hdgst": false, 00:20:07.939 "ddgst": false 00:20:07.939 }, 00:20:07.939 "method": "bdev_nvme_attach_controller" 00:20:07.939 },{ 00:20:07.939 "params": { 00:20:07.939 "name": "Nvme1", 00:20:07.939 "trtype": "tcp", 00:20:07.939 "traddr": "10.0.0.3", 00:20:07.939 "adrfam": "ipv4", 00:20:07.939 "trsvcid": "4420", 00:20:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.939 "hdgst": false, 00:20:07.939 "ddgst": false 00:20:07.939 }, 00:20:07.939 "method": "bdev_nvme_attach_controller" 00:20:07.939 },{ 00:20:07.939 "params": { 00:20:07.939 "name": "Nvme2", 00:20:07.939 "trtype": "tcp", 00:20:07.939 "traddr": "10.0.0.3", 00:20:07.939 "adrfam": "ipv4", 00:20:07.939 "trsvcid": "4420", 00:20:07.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:07.939 "hdgst": false, 00:20:07.939 "ddgst": false 00:20:07.939 }, 00:20:07.939 "method": "bdev_nvme_attach_controller" 00:20:07.939 }' 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.939 10:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.939 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.939 ... 00:20:07.939 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.939 ... 00:20:07.939 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.939 ... 00:20:07.939 fio-3.35 00:20:07.939 Starting 24 threads 00:20:20.185 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82924: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=233, BW=933KiB/s (955kB/s)(9408KiB/10087msec) 00:20:20.185 slat (usec): min=7, max=8026, avg=22.78, stdev=286.03 00:20:20.185 clat (usec): min=1482, max=143836, avg=68409.34, stdev=27196.24 00:20:20.185 lat (usec): min=1490, max=143851, avg=68432.13, stdev=27207.99 00:20:20.185 clat percentiles (usec): 00:20:20.185 | 1.00th=[ 1680], 5.00th=[ 5604], 10.00th=[ 30802], 20.00th=[ 47973], 00:20:20.185 | 30.00th=[ 60031], 40.00th=[ 71828], 50.00th=[ 71828], 60.00th=[ 72877], 00:20:20.185 | 70.00th=[ 81265], 80.00th=[ 85459], 90.00th=[100140], 95.00th=[108528], 00:20:20.185 | 99.00th=[120062], 99.50th=[131597], 99.90th=[141558], 99.95th=[143655], 00:20:20.185 | 99.99th=[143655] 00:20:20.185 bw ( KiB/s): min= 656, max= 2555, per=4.14%, avg=933.35, stdev=403.98, samples=20 00:20:20.185 iops : min= 164, max= 638, avg=233.30, stdev=100.84, samples=20 00:20:20.185 lat (msec) : 2=2.21%, 4=1.28%, 10=2.93%, 20=0.98%, 50=16.28% 00:20:20.185 lat (msec) : 100=66.71%, 250=9.61% 00:20:20.185 cpu : usr=31.21%, sys=1.98%, ctx=896, majf=0, minf=0 00:20:20.185 IO depths : 1=0.2%, 2=2.0%, 4=7.1%, 8=74.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 complete : 0=0.0%, 4=89.7%, 8=8.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82925: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=230, BW=920KiB/s (943kB/s)(9236KiB/10034msec) 00:20:20.185 slat (usec): min=4, max=8026, avg=21.21, stdev=235.72 00:20:20.185 clat (msec): min=25, max=131, avg=69.42, stdev=20.14 00:20:20.185 lat (msec): min=26, max=131, avg=69.45, stdev=20.13 00:20:20.185 clat percentiles (msec): 00:20:20.185 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 48], 00:20:20.185 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:20:20.185 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 108], 00:20:20.185 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:20:20.185 | 99.99th=[ 132] 00:20:20.185 bw ( KiB/s): min= 656, max= 1352, per=4.07%, avg=917.20, stdev=156.88, samples=20 00:20:20.185 iops : min= 164, max= 338, avg=229.30, stdev=39.22, samples=20 00:20:20.185 lat (msec) : 50=23.78%, 100=66.52%, 250=9.70% 00:20:20.185 cpu : usr=34.04%, sys=1.92%, ctx=1187, majf=0, minf=9 00:20:20.185 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82926: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=218, BW=875KiB/s (896kB/s)(8804KiB/10059msec) 00:20:20.185 slat (usec): min=4, max=12026, avg=23.04, stdev=307.78 00:20:20.185 clat (msec): min=9, max=144, avg=72.88, stdev=23.10 00:20:20.185 lat (msec): min=9, max=144, avg=72.91, stdev=23.10 00:20:20.185 clat percentiles (msec): 00:20:20.185 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 57], 00:20:20.185 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 77], 00:20:20.185 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 115], 00:20:20.185 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 144], 00:20:20.185 | 99.99th=[ 144] 00:20:20.185 bw ( KiB/s): min= 632, max= 1650, per=3.88%, avg=874.95, stdev=222.92, samples=20 00:20:20.185 iops : min= 158, max= 412, avg=218.70, stdev=55.65, samples=20 00:20:20.185 lat (msec) : 10=0.09%, 20=1.91%, 50=16.45%, 100=69.83%, 250=11.72% 00:20:20.185 cpu : usr=36.68%, sys=2.16%, ctx=1141, majf=0, minf=9 00:20:20.185 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=70.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 complete : 0=0.0%, 4=90.7%, 8=6.9%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82927: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=205, BW=821KiB/s (841kB/s)(8256KiB/10056msec) 00:20:20.185 slat (usec): min=5, max=6958, avg=18.76, stdev=160.15 00:20:20.185 clat (msec): min=14, max=127, avg=77.72, stdev=22.71 00:20:20.185 lat (msec): min=14, max=127, avg=77.74, stdev=22.72 00:20:20.185 clat percentiles (msec): 00:20:20.185 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 69], 00:20:20.185 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 81], 00:20:20.185 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 116], 00:20:20.185 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 128], 00:20:20.185 | 99.99th=[ 128] 00:20:20.185 bw ( KiB/s): min= 624, max= 1650, per=3.62%, avg=817.80, stdev=228.60, samples=20 00:20:20.185 iops : min= 156, max= 412, avg=204.40, stdev=57.06, samples=20 00:20:20.185 lat (msec) : 20=2.13%, 50=10.17%, 100=73.11%, 250=14.58% 00:20:20.185 cpu : usr=45.73%, sys=2.99%, ctx=1419, majf=0, minf=9 00:20:20.185 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82928: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=242, BW=971KiB/s (994kB/s)(9756KiB/10050msec) 00:20:20.185 slat (usec): min=3, max=4023, avg=18.90, stdev=140.62 00:20:20.185 clat (msec): min=15, max=125, avg=65.71, stdev=21.34 00:20:20.185 lat (msec): min=15, max=125, avg=65.73, stdev=21.34 00:20:20.185 clat percentiles (msec): 00:20:20.185 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 39], 20.00th=[ 48], 00:20:20.185 | 
30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.185 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 105], 00:20:20.185 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:20.185 | 99.99th=[ 126] 00:20:20.185 bw ( KiB/s): min= 664, max= 1576, per=4.31%, avg=971.65, stdev=205.66, samples=20 00:20:20.185 iops : min= 166, max= 394, avg=242.90, stdev=51.42, samples=20 00:20:20.185 lat (msec) : 20=0.12%, 50=27.10%, 100=65.40%, 250=7.38% 00:20:20.185 cpu : usr=42.44%, sys=2.69%, ctx=1237, majf=0, minf=9 00:20:20.185 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.185 issued rwts: total=2439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.185 filename0: (groupid=0, jobs=1): err= 0: pid=82929: Tue Nov 12 10:42:06 2024 00:20:20.185 read: IOPS=241, BW=967KiB/s (990kB/s)(9716KiB/10048msec) 00:20:20.185 slat (usec): min=4, max=8024, avg=22.93, stdev=215.09 00:20:20.185 clat (msec): min=19, max=143, avg=65.97, stdev=21.13 00:20:20.185 lat (msec): min=19, max=143, avg=66.00, stdev=21.13 00:20:20.185 clat percentiles (msec): 00:20:20.185 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:20:20.185 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.185 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 108], 00:20:20.185 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 140], 00:20:20.185 | 99.99th=[ 144] 00:20:20.185 bw ( KiB/s): min= 680, max= 1464, per=4.29%, avg=967.20, stdev=196.81, samples=20 00:20:20.185 iops : min= 170, max= 366, avg=241.75, stdev=49.14, samples=20 00:20:20.185 lat (msec) : 20=0.08%, 50=28.37%, 100=64.55%, 250=7.00% 00:20:20.185 cpu : usr=35.73%, sys=2.15%, ctx=1108, majf=0, minf=9 00:20:20.185 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:20.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename0: (groupid=0, jobs=1): err= 0: pid=82930: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=237, BW=948KiB/s (971kB/s)(9548KiB/10071msec) 00:20:20.186 slat (usec): min=4, max=8024, avg=24.69, stdev=295.43 00:20:20.186 clat (usec): min=1594, max=131808, avg=67175.86, stdev=22637.10 00:20:20.186 lat (usec): min=1610, max=131825, avg=67200.54, stdev=22638.95 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 108], 00:20:20.186 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:20:20.186 | 99.99th=[ 132] 00:20:20.186 bw ( KiB/s): min= 688, max= 1972, per=4.20%, avg=947.80, stdev=274.10, samples=20 00:20:20.186 iops : min= 172, max= 493, avg=236.95, stdev=68.53, samples=20 00:20:20.186 lat (msec) : 2=0.08%, 10=2.43%, 20=0.92%, 50=20.44%, 100=69.33% 00:20:20.186 lat (msec) : 250=6.79% 00:20:20.186 cpu : usr=32.27%, sys=2.05%, ctx=932, majf=0, minf=0 00:20:20.186 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.9%, 
16=16.7%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename0: (groupid=0, jobs=1): err= 0: pid=82931: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=238, BW=953KiB/s (976kB/s)(9572KiB/10043msec) 00:20:20.186 slat (usec): min=3, max=4023, avg=18.09, stdev=111.67 00:20:20.186 clat (msec): min=16, max=144, avg=67.04, stdev=21.68 00:20:20.186 lat (msec): min=16, max=144, avg=67.06, stdev=21.68 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:20:20.186 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 138], 00:20:20.186 | 99.99th=[ 146] 00:20:20.186 bw ( KiB/s): min= 656, max= 1496, per=4.21%, avg=950.80, stdev=201.47, samples=20 00:20:20.186 iops : min= 164, max= 374, avg=237.70, stdev=50.37, samples=20 00:20:20.186 lat (msec) : 20=0.59%, 50=26.03%, 100=65.27%, 250=8.11% 00:20:20.186 cpu : usr=40.30%, sys=2.67%, ctx=1214, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82932: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=247, BW=990KiB/s (1014kB/s)(9904KiB/10003msec) 00:20:20.186 slat (usec): min=4, max=8024, avg=24.64, stdev=254.57 00:20:20.186 clat (usec): min=1924, max=129340, avg=64527.45, stdev=20407.51 00:20:20.186 lat (usec): min=1932, max=129360, avg=64552.09, stdev=20402.07 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 106], 00:20:20.186 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:20:20.186 | 99.99th=[ 130] 00:20:20.186 bw ( KiB/s): min= 720, max= 1400, per=4.33%, avg=977.16, stdev=138.46, samples=19 00:20:20.186 iops : min= 180, max= 350, avg=244.21, stdev=34.57, samples=19 00:20:20.186 lat (msec) : 2=0.24%, 4=1.05%, 10=0.12%, 20=0.24%, 50=29.48% 00:20:20.186 lat (msec) : 100=63.17%, 250=5.69% 00:20:20.186 cpu : usr=35.85%, sys=2.41%, ctx=1015, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=83.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82933: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=236, BW=948KiB/s (971kB/s)(9496KiB/10019msec) 00:20:20.186 slat (usec): min=4, max=12032, avg=19.93, stdev=246.70 00:20:20.186 clat (msec): 
min=21, max=128, avg=67.36, stdev=20.03 00:20:20.186 lat (msec): min=21, max=128, avg=67.38, stdev=20.02 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 108], 00:20:20.186 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:20:20.186 | 99.99th=[ 129] 00:20:20.186 bw ( KiB/s): min= 696, max= 1536, per=4.20%, avg=946.00, stdev=168.29, samples=20 00:20:20.186 iops : min= 174, max= 384, avg=236.50, stdev=42.07, samples=20 00:20:20.186 lat (msec) : 50=26.41%, 100=67.40%, 250=6.19% 00:20:20.186 cpu : usr=30.98%, sys=1.96%, ctx=875, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82934: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=234, BW=939KiB/s (961kB/s)(9392KiB/10004msec) 00:20:20.186 slat (usec): min=4, max=8026, avg=30.82, stdev=369.39 00:20:20.186 clat (msec): min=13, max=121, avg=68.05, stdev=20.51 00:20:20.186 lat (msec): min=13, max=121, avg=68.08, stdev=20.52 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 108], 00:20:20.186 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:20:20.186 | 99.99th=[ 122] 00:20:20.186 bw ( KiB/s): min= 664, max= 1536, per=4.16%, avg=938.53, stdev=183.59, samples=19 00:20:20.186 iops : min= 166, max= 384, avg=234.63, stdev=45.90, samples=19 00:20:20.186 lat (msec) : 20=0.51%, 50=24.74%, 100=67.93%, 250=6.81% 00:20:20.186 cpu : usr=30.91%, sys=2.03%, ctx=883, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=77.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82935: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=239, BW=957KiB/s (980kB/s)(9632KiB/10060msec) 00:20:20.186 slat (usec): min=4, max=8023, avg=16.11, stdev=163.32 00:20:20.186 clat (usec): min=1542, max=146890, avg=66703.91, stdev=23791.51 00:20:20.186 lat (usec): min=1558, max=146904, avg=66720.02, stdev=23792.60 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:20:20.186 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 144], 00:20:20.186 | 99.99th=[ 148] 00:20:20.186 bw ( KiB/s): min= 736, max= 1984, per=4.25%, avg=958.00, stdev=286.11, samples=20 00:20:20.186 iops : min= 184, max= 496, avg=239.50, stdev=71.53, samples=20 00:20:20.186 lat 
(msec) : 2=0.08%, 10=1.83%, 20=1.74%, 50=22.22%, 100=66.65% 00:20:20.186 lat (msec) : 250=7.48% 00:20:20.186 cpu : usr=31.36%, sys=1.97%, ctx=1044, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82936: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=243, BW=972KiB/s (995kB/s)(9748KiB/10028msec) 00:20:20.186 slat (usec): min=4, max=8029, avg=28.23, stdev=277.49 00:20:20.186 clat (msec): min=23, max=124, avg=65.65, stdev=20.11 00:20:20.186 lat (msec): min=23, max=124, avg=65.68, stdev=20.10 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.186 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 107], 00:20:20.186 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:20.186 | 99.99th=[ 126] 00:20:20.186 bw ( KiB/s): min= 672, max= 1480, per=4.29%, avg=968.40, stdev=168.24, samples=20 00:20:20.186 iops : min= 168, max= 370, avg=242.10, stdev=42.06, samples=20 00:20:20.186 lat (msec) : 50=29.38%, 100=64.75%, 250=5.87% 00:20:20.186 cpu : usr=38.35%, sys=2.01%, ctx=1204, majf=0, minf=9 00:20:20.186 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:20.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.186 issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.186 filename1: (groupid=0, jobs=1): err= 0: pid=82937: Tue Nov 12 10:42:06 2024 00:20:20.186 read: IOPS=243, BW=975KiB/s (999kB/s)(9800KiB/10049msec) 00:20:20.186 slat (usec): min=3, max=4031, avg=24.18, stdev=189.66 00:20:20.186 clat (msec): min=18, max=139, avg=65.39, stdev=20.24 00:20:20.186 lat (msec): min=18, max=139, avg=65.42, stdev=20.24 00:20:20.186 clat percentiles (msec): 00:20:20.186 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:20:20.186 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 106], 00:20:20.187 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:20:20.187 | 99.99th=[ 140] 00:20:20.187 bw ( KiB/s): min= 688, max= 1448, per=4.33%, avg=975.85, stdev=180.12, samples=20 00:20:20.187 iops : min= 172, max= 362, avg=243.95, stdev=45.03, samples=20 00:20:20.187 lat (msec) : 20=0.24%, 50=26.29%, 100=67.27%, 250=6.20% 00:20:20.187 cpu : usr=42.16%, sys=2.05%, ctx=1311, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename1: (groupid=0, jobs=1): err= 0: pid=82938: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=239, 
BW=959KiB/s (982kB/s)(9604KiB/10015msec) 00:20:20.187 slat (usec): min=4, max=4028, avg=22.13, stdev=163.67 00:20:20.187 clat (msec): min=23, max=125, avg=66.62, stdev=19.96 00:20:20.187 lat (msec): min=23, max=125, avg=66.64, stdev=19.96 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 48], 00:20:20.187 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 107], 00:20:20.187 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:20.187 | 99.99th=[ 126] 00:20:20.187 bw ( KiB/s): min= 720, max= 1536, per=4.24%, avg=956.40, stdev=157.72, samples=20 00:20:20.187 iops : min= 180, max= 384, avg=239.10, stdev=39.43, samples=20 00:20:20.187 lat (msec) : 50=25.53%, 100=68.26%, 250=6.21% 00:20:20.187 cpu : usr=40.32%, sys=2.41%, ctx=1282, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=82.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename1: (groupid=0, jobs=1): err= 0: pid=82939: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=233, BW=935KiB/s (958kB/s)(9364KiB/10012msec) 00:20:20.187 slat (usec): min=3, max=8025, avg=19.77, stdev=185.29 00:20:20.187 clat (msec): min=22, max=130, avg=68.32, stdev=20.17 00:20:20.187 lat (msec): min=22, max=130, avg=68.34, stdev=20.16 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:20:20.187 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 107], 00:20:20.187 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 131], 00:20:20.187 | 99.99th=[ 131] 00:20:20.187 bw ( KiB/s): min= 720, max= 1536, per=4.13%, avg=932.40, stdev=178.35, samples=20 00:20:20.187 iops : min= 180, max= 384, avg=233.10, stdev=44.59, samples=20 00:20:20.187 lat (msec) : 50=24.43%, 100=69.16%, 250=6.41% 00:20:20.187 cpu : usr=34.59%, sys=2.09%, ctx=1074, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=76.1%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 0: pid=82940: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=219, BW=877KiB/s (898kB/s)(8784KiB/10012msec) 00:20:20.187 slat (usec): min=4, max=8038, avg=32.98, stdev=382.07 00:20:20.187 clat (msec): min=14, max=142, avg=72.74, stdev=19.96 00:20:20.187 lat (msec): min=14, max=142, avg=72.77, stdev=19.97 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 57], 00:20:20.187 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:20:20.187 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 108], 00:20:20.187 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:20:20.187 | 99.99th=[ 144] 00:20:20.187 bw ( KiB/s): min= 640, max= 1424, per=3.87%, avg=872.10, stdev=165.22, 
samples=20 00:20:20.187 iops : min= 160, max= 356, avg=218.00, stdev=41.28, samples=20 00:20:20.187 lat (msec) : 20=0.32%, 50=17.03%, 100=73.82%, 250=8.83% 00:20:20.187 cpu : usr=34.66%, sys=2.01%, ctx=1019, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=2.7%, 4=10.7%, 8=72.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 0: pid=82941: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=244, BW=977KiB/s (1000kB/s)(9836KiB/10068msec) 00:20:20.187 slat (usec): min=3, max=8023, avg=18.71, stdev=180.62 00:20:20.187 clat (msec): min=5, max=142, avg=65.35, stdev=23.95 00:20:20.187 lat (msec): min=5, max=142, avg=65.37, stdev=23.95 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 48], 00:20:20.187 | 30.00th=[ 51], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 109], 00:20:20.187 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 144], 00:20:20.187 | 99.99th=[ 144] 00:20:20.187 bw ( KiB/s): min= 680, max= 2136, per=4.34%, avg=978.40, stdev=311.18, samples=20 00:20:20.187 iops : min= 170, max= 534, avg=244.60, stdev=77.80, samples=20 00:20:20.187 lat (msec) : 10=2.52%, 20=1.79%, 50=24.81%, 100=63.44%, 250=7.44% 00:20:20.187 cpu : usr=38.17%, sys=2.24%, ctx=1072, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 0: pid=82942: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=238, BW=954KiB/s (977kB/s)(9568KiB/10029msec) 00:20:20.187 slat (usec): min=4, max=4026, avg=25.33, stdev=200.68 00:20:20.187 clat (msec): min=22, max=122, avg=66.94, stdev=19.88 00:20:20.187 lat (msec): min=22, max=122, avg=66.96, stdev=19.88 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 48], 00:20:20.187 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 105], 00:20:20.187 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:20:20.187 | 99.99th=[ 123] 00:20:20.187 bw ( KiB/s): min= 696, max= 1536, per=4.21%, avg=950.40, stdev=185.06, samples=20 00:20:20.187 iops : min= 174, max= 384, avg=237.60, stdev=46.27, samples=20 00:20:20.187 lat (msec) : 50=24.54%, 100=68.85%, 250=6.61% 00:20:20.187 cpu : usr=42.36%, sys=2.39%, ctx=1189, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 
0: pid=82943: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=246, BW=988KiB/s (1011kB/s)(9884KiB/10009msec) 00:20:20.187 slat (usec): min=3, max=8028, avg=31.69, stdev=365.17 00:20:20.187 clat (msec): min=13, max=131, avg=64.69, stdev=20.14 00:20:20.187 lat (msec): min=13, max=131, avg=64.72, stdev=20.14 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:20:20.187 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.187 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 105], 00:20:20.187 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:20.187 | 99.99th=[ 131] 00:20:20.187 bw ( KiB/s): min= 712, max= 1536, per=4.35%, avg=982.00, stdev=175.71, samples=20 00:20:20.187 iops : min= 178, max= 384, avg=245.50, stdev=43.93, samples=20 00:20:20.187 lat (msec) : 20=0.28%, 50=30.88%, 100=63.42%, 250=5.42% 00:20:20.187 cpu : usr=31.26%, sys=1.95%, ctx=995, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 0: pid=82944: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=217, BW=871KiB/s (892kB/s)(8744KiB/10039msec) 00:20:20.187 slat (usec): min=4, max=6573, avg=18.35, stdev=151.91 00:20:20.187 clat (msec): min=23, max=143, avg=73.37, stdev=22.24 00:20:20.187 lat (msec): min=23, max=143, avg=73.39, stdev=22.24 00:20:20.187 clat percentiles (msec): 00:20:20.187 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 51], 00:20:20.187 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 80], 00:20:20.187 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 112], 00:20:20.187 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 142], 00:20:20.187 | 99.99th=[ 144] 00:20:20.187 bw ( KiB/s): min= 640, max= 1424, per=3.85%, avg=868.00, stdev=191.69, samples=20 00:20:20.187 iops : min= 160, max= 356, avg=217.00, stdev=47.92, samples=20 00:20:20.187 lat (msec) : 50=19.90%, 100=68.76%, 250=11.34% 00:20:20.187 cpu : usr=38.16%, sys=2.56%, ctx=1229, majf=0, minf=9 00:20:20.187 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:20.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.187 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.187 filename2: (groupid=0, jobs=1): err= 0: pid=82945: Tue Nov 12 10:42:06 2024 00:20:20.187 read: IOPS=241, BW=964KiB/s (988kB/s)(9680KiB/10037msec) 00:20:20.187 slat (usec): min=3, max=7036, avg=23.68, stdev=220.36 00:20:20.188 clat (msec): min=23, max=129, avg=66.22, stdev=20.13 00:20:20.188 lat (msec): min=23, max=129, avg=66.24, stdev=20.13 00:20:20.188 clat percentiles (msec): 00:20:20.188 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:20:20.188 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:20.188 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 105], 00:20:20.188 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 130], 99.95th=[ 130], 00:20:20.188 | 99.99th=[ 130] 
00:20:20.188 bw ( KiB/s): min= 720, max= 1408, per=4.26%, avg=961.60, stdev=164.50, samples=20 00:20:20.188 iops : min= 180, max= 352, avg=240.40, stdev=41.12, samples=20 00:20:20.188 lat (msec) : 50=25.99%, 100=67.56%, 250=6.45% 00:20:20.188 cpu : usr=41.48%, sys=2.61%, ctx=1631, majf=0, minf=9 00:20:20.188 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=80.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:20.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.188 filename2: (groupid=0, jobs=1): err= 0: pid=82946: Tue Nov 12 10:42:06 2024 00:20:20.188 read: IOPS=245, BW=981KiB/s (1004kB/s)(9836KiB/10028msec) 00:20:20.188 slat (nsec): min=4014, max=38798, avg=14582.18, stdev=4826.83 00:20:20.188 clat (msec): min=22, max=124, avg=65.16, stdev=19.97 00:20:20.188 lat (msec): min=22, max=124, avg=65.17, stdev=19.97 00:20:20.188 clat percentiles (msec): 00:20:20.188 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:20:20.188 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 71], 60.00th=[ 72], 00:20:20.188 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 107], 00:20:20.188 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:20.188 | 99.99th=[ 126] 00:20:20.188 bw ( KiB/s): min= 688, max= 1472, per=4.33%, avg=977.20, stdev=177.97, samples=20 00:20:20.188 iops : min= 172, max= 368, avg=244.30, stdev=44.49, samples=20 00:20:20.188 lat (msec) : 50=32.70%, 100=61.49%, 250=5.82% 00:20:20.188 cpu : usr=31.04%, sys=2.14%, ctx=1030, majf=0, minf=9 00:20:20.188 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:20.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:20.188 filename2: (groupid=0, jobs=1): err= 0: pid=82947: Tue Nov 12 10:42:06 2024 00:20:20.188 read: IOPS=244, BW=977KiB/s (1000kB/s)(9796KiB/10027msec) 00:20:20.188 slat (usec): min=4, max=8023, avg=24.94, stdev=249.04 00:20:20.188 clat (msec): min=23, max=127, avg=65.38, stdev=19.68 00:20:20.188 lat (msec): min=23, max=127, avg=65.40, stdev=19.68 00:20:20.188 clat percentiles (msec): 00:20:20.188 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:20:20.188 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 72], 00:20:20.188 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 106], 00:20:20.188 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 128], 00:20:20.188 | 99.99th=[ 128] 00:20:20.188 bw ( KiB/s): min= 720, max= 1480, per=4.32%, avg=973.20, stdev=159.63, samples=20 00:20:20.188 iops : min= 180, max= 370, avg=243.30, stdev=39.91, samples=20 00:20:20.188 lat (msec) : 50=29.40%, 100=64.84%, 250=5.76% 00:20:20.188 cpu : usr=39.22%, sys=2.11%, ctx=1127, majf=0, minf=9 00:20:20.188 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:20.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.188 issued rwts: total=2449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.188 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:20:20.188 00:20:20.188 Run status group 0 (all jobs): 00:20:20.188 READ: bw=22.0MiB/s (23.1MB/s), 821KiB/s-990KiB/s (841kB/s-1014kB/s), io=222MiB (233MB), run=10003-10087msec 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- 
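The 24-thread run summarized above is driven the same way as the earlier three-thread one: the SPDK bdev fio plugin is injected with LD_PRELOAD, fio is started with --ioengine=spdk_bdev, the generated JSON config arrives on /dev/fd/62 and the generated job file on /dev/fd/61. Outside the harness the pattern looks roughly like the sketch below. The job file is reconstructed from the parameters visible in the trace (randread, 4 KiB blocks, iodepth 16, 8 jobs per file); bdev.json stands in for a config like the one printed earlier, and Nvme0n1 is assumed to be the bdev name derived from the attached controller Nvme0.

# Hedged sketch of the fio invocation pattern used by the test.
cat > dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=4k
iodepth=16

[filename0]
filename=Nvme0n1
numjobs=8
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json dif.fio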
common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 bdev_null0 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 [2024-11-12 10:42:07.035715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:20.188 10:42:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 bdev_null1 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.188 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.189 { 00:20:20.189 "params": { 00:20:20.189 "name": "Nvme$subsystem", 00:20:20.189 "trtype": "$TEST_TRANSPORT", 00:20:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.189 "adrfam": "ipv4", 00:20:20.189 "trsvcid": "$NVMF_PORT", 00:20:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.189 "hdgst": ${hdgst:-false}, 00:20:20.189 "ddgst": ${ddgst:-false} 00:20:20.189 }, 00:20:20.189 "method": "bdev_nvme_attach_controller" 00:20:20.189 } 00:20:20.189 EOF 00:20:20.189 )") 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.189 { 00:20:20.189 "params": { 00:20:20.189 "name": "Nvme$subsystem", 00:20:20.189 "trtype": "$TEST_TRANSPORT", 00:20:20.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.189 "adrfam": "ipv4", 00:20:20.189 "trsvcid": "$NVMF_PORT", 00:20:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.189 "hdgst": ${hdgst:-false}, 00:20:20.189 "ddgst": ${ddgst:-false} 00:20:20.189 }, 00:20:20.189 "method": "bdev_nvme_attach_controller" 00:20:20.189 } 00:20:20.189 EOF 00:20:20.189 )") 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
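Editor's note: the heredoc fragments collected above are rendered through jq and handed to fio on /dev/fd/62, with the generated job file on /dev/fd/61; the fully expanded per-controller JSON is printed in the next entries. A rough hand-run equivalent, using the exact plugin path and flags from this trace (the file names bdev.json and dif.fio, the outer "subsystems"/"bdev"/"config" wrapper, and the job-file contents are assumptions for illustration, not shown verbatim in the log):

    # Sketch only: drive the two null-bdev-backed controllers with fio via the SPDK bdev ioengine.
    # bdev.json is assumed to hold a standard SPDK JSON config wrapping the two
    # bdev_nvme_attach_controller stanzas printed below (Nvme0/Nvme1 -> 10.0.0.3:4420).
    # dif.fio is assumed to be a plain fio job file matching the test parameters
    # (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio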
00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:20.189 "params": { 00:20:20.189 "name": "Nvme0", 00:20:20.189 "trtype": "tcp", 00:20:20.189 "traddr": "10.0.0.3", 00:20:20.189 "adrfam": "ipv4", 00:20:20.189 "trsvcid": "4420", 00:20:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:20.189 "hdgst": false, 00:20:20.189 "ddgst": false 00:20:20.189 }, 00:20:20.189 "method": "bdev_nvme_attach_controller" 00:20:20.189 },{ 00:20:20.189 "params": { 00:20:20.189 "name": "Nvme1", 00:20:20.189 "trtype": "tcp", 00:20:20.189 "traddr": "10.0.0.3", 00:20:20.189 "adrfam": "ipv4", 00:20:20.189 "trsvcid": "4420", 00:20:20.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.189 "hdgst": false, 00:20:20.189 "ddgst": false 00:20:20.189 }, 00:20:20.189 "method": "bdev_nvme_attach_controller" 00:20:20.189 }' 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:20.189 10:42:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.189 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:20.189 ... 00:20:20.189 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:20.189 ... 
00:20:20.189 fio-3.35 00:20:20.189 Starting 4 threads 00:20:24.378 00:20:24.378 filename0: (groupid=0, jobs=1): err= 0: pid=83090: Tue Nov 12 10:42:12 2024 00:20:24.378 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5001msec) 00:20:24.378 slat (nsec): min=3143, max=76718, avg=14608.69, stdev=5253.82 00:20:24.378 clat (usec): min=969, max=6540, avg=4017.46, stdev=429.70 00:20:24.378 lat (usec): min=977, max=6550, avg=4032.06, stdev=430.14 00:20:24.378 clat percentiles (usec): 00:20:24.378 | 1.00th=[ 2147], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3884], 00:20:24.378 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:20:24.378 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4490], 00:20:24.378 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 6325], 99.95th=[ 6390], 00:20:24.378 | 99.99th=[ 6521] 00:20:24.378 bw ( KiB/s): min=14048, max=17568, per=23.28%, avg=15720.89, stdev=980.88, samples=9 00:20:24.378 iops : min= 1756, max= 2196, avg=1965.11, stdev=122.61, samples=9 00:20:24.378 lat (usec) : 1000=0.02% 00:20:24.378 lat (msec) : 2=0.35%, 4=40.58%, 10=59.05% 00:20:24.378 cpu : usr=91.80%, sys=7.44%, ctx=37, majf=0, minf=9 00:20:24.378 IO depths : 1=0.1%, 2=23.4%, 4=51.0%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.378 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.378 issued rwts: total=9819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.378 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.378 filename0: (groupid=0, jobs=1): err= 0: pid=83091: Tue Nov 12 10:42:12 2024 00:20:24.378 read: IOPS=2014, BW=15.7MiB/s (16.5MB/s)(78.8MiB/5003msec) 00:20:24.378 slat (nsec): min=6646, max=68433, avg=13983.93, stdev=5051.99 00:20:24.378 clat (usec): min=897, max=8918, avg=3918.25, stdev=507.83 00:20:24.378 lat (usec): min=906, max=8940, avg=3932.24, stdev=508.57 00:20:24.378 clat percentiles (usec): 00:20:24.378 | 1.00th=[ 1991], 5.00th=[ 2638], 10.00th=[ 3654], 20.00th=[ 3818], 00:20:24.378 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:20:24.378 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:20:24.378 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5997], 99.95th=[ 8160], 00:20:24.378 | 99.99th=[ 8160] 00:20:24.378 bw ( KiB/s): min=15216, max=17952, per=23.97%, avg=16184.89, stdev=1051.45, samples=9 00:20:24.378 iops : min= 1902, max= 2244, avg=2023.11, stdev=131.43, samples=9 00:20:24.378 lat (usec) : 1000=0.08% 00:20:24.378 lat (msec) : 2=0.92%, 4=43.30%, 10=55.70% 00:20:24.378 cpu : usr=91.06%, sys=8.12%, ctx=7, majf=0, minf=0 00:20:24.378 IO depths : 1=0.1%, 2=21.2%, 4=52.3%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.378 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.378 issued rwts: total=10081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.378 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.378 filename1: (groupid=0, jobs=1): err= 0: pid=83092: Tue Nov 12 10:42:12 2024 00:20:24.378 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5001msec) 00:20:24.378 slat (usec): min=7, max=594, avg=15.32, stdev= 7.63 00:20:24.378 clat (usec): min=1150, max=8195, avg=4003.83, stdev=384.42 00:20:24.378 lat (usec): min=1173, max=8220, avg=4019.15, stdev=384.74 00:20:24.378 clat percentiles (usec): 00:20:24.378 | 1.00th=[ 2409], 5.00th=[ 3621], 
10.00th=[ 3720], 20.00th=[ 3884], 00:20:24.379 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:20:24.379 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4490], 00:20:24.379 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5473], 00:20:24.379 | 99.99th=[ 8225] 00:20:24.379 bw ( KiB/s): min=15232, max=16688, per=23.35%, avg=15761.67, stdev=468.44, samples=9 00:20:24.379 iops : min= 1904, max= 2086, avg=1970.11, stdev=58.57, samples=9 00:20:24.379 lat (msec) : 2=0.67%, 4=40.75%, 10=58.58% 00:20:24.379 cpu : usr=92.44%, sys=6.72%, ctx=10, majf=0, minf=0 00:20:24.379 IO depths : 1=0.1%, 2=23.3%, 4=51.2%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.379 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.379 issued rwts: total=9848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.379 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.379 filename1: (groupid=0, jobs=1): err= 0: pid=83093: Tue Nov 12 10:42:12 2024 00:20:24.379 read: IOPS=2493, BW=19.5MiB/s (20.4MB/s)(97.4MiB/5002msec) 00:20:24.379 slat (nsec): min=3298, max=56875, avg=10930.71, stdev=4530.95 00:20:24.379 clat (usec): min=708, max=6385, avg=3177.44, stdev=1066.80 00:20:24.379 lat (usec): min=716, max=6401, avg=3188.37, stdev=1067.10 00:20:24.379 clat percentiles (usec): 00:20:24.379 | 1.00th=[ 1254], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1483], 00:20:24.379 | 30.00th=[ 2900], 40.00th=[ 3195], 50.00th=[ 3687], 60.00th=[ 3785], 00:20:24.379 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4359], 00:20:24.379 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 6128], 99.95th=[ 6325], 00:20:24.379 | 99.99th=[ 6390] 00:20:24.379 bw ( KiB/s): min=16256, max=21488, per=29.29%, avg=19774.78, stdev=2156.62, samples=9 00:20:24.379 iops : min= 2032, max= 2686, avg=2471.78, stdev=269.62, samples=9 00:20:24.379 lat (usec) : 750=0.07%, 1000=0.26% 00:20:24.379 lat (msec) : 2=20.84%, 4=61.38%, 10=17.46% 00:20:24.379 cpu : usr=90.56%, sys=8.42%, ctx=7, majf=0, minf=10 00:20:24.379 IO depths : 1=0.1%, 2=4.4%, 4=61.4%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.379 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.379 issued rwts: total=12472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.379 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.379 00:20:24.379 Run status group 0 (all jobs): 00:20:24.379 READ: bw=65.9MiB/s (69.1MB/s), 15.3MiB/s-19.5MiB/s (16.1MB/s-20.4MB/s), io=330MiB (346MB), run=5001-5003msec 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 ************************************ 00:20:24.379 END TEST fio_dif_rand_params 00:20:24.379 ************************************ 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 00:20:24.379 real 0m23.225s 00:20:24.379 user 2m2.327s 00:20:24.379 sys 0m8.952s 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:24.379 10:42:13 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:24.379 10:42:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 ************************************ 00:20:24.379 START TEST fio_dif_digest 00:20:24.379 ************************************ 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:24.379 10:42:13 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 bdev_null0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.379 [2024-11-12 10:42:13.097639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:24.379 { 00:20:24.379 "params": { 00:20:24.379 "name": "Nvme$subsystem", 00:20:24.379 "trtype": "$TEST_TRANSPORT", 00:20:24.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.379 "adrfam": "ipv4", 00:20:24.379 "trsvcid": "$NVMF_PORT", 00:20:24.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.379 "hdgst": ${hdgst:-false}, 00:20:24.379 "ddgst": ${ddgst:-false} 00:20:24.379 }, 00:20:24.379 "method": "bdev_nvme_attach_controller" 00:20:24.379 } 00:20:24.379 EOF 00:20:24.379 )") 00:20:24.379 
10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:24.379 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:24.380 10:42:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:24.380 "params": { 00:20:24.380 "name": "Nvme0", 00:20:24.380 "trtype": "tcp", 00:20:24.380 "traddr": "10.0.0.3", 00:20:24.380 "adrfam": "ipv4", 00:20:24.380 "trsvcid": "4420", 00:20:24.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.380 "hdgst": true, 00:20:24.380 "ddgst": true 00:20:24.380 }, 00:20:24.380 "method": "bdev_nvme_attach_controller" 00:20:24.380 }' 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.656 10:42:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.656 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:24.656 ... 00:20:24.656 fio-3.35 00:20:24.656 Starting 3 threads 00:20:36.862 00:20:36.862 filename0: (groupid=0, jobs=1): err= 0: pid=83199: Tue Nov 12 10:42:23 2024 00:20:36.862 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(312MiB/10001msec) 00:20:36.862 slat (nsec): min=6766, max=44174, avg=9751.82, stdev=4100.01 00:20:36.862 clat (usec): min=11492, max=14844, avg=12009.82, stdev=513.41 00:20:36.862 lat (usec): min=11499, max=14858, avg=12019.57, stdev=513.82 00:20:36.862 clat percentiles (usec): 00:20:36.862 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:20:36.862 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:20:36.862 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[13304], 00:20:36.862 | 99.00th=[13698], 99.50th=[13698], 99.90th=[14877], 99.95th=[14877], 00:20:36.862 | 99.99th=[14877] 00:20:36.862 bw ( KiB/s): min=29952, max=33024, per=33.35%, avg=31932.63, stdev=738.23, samples=19 00:20:36.862 iops : min= 234, max= 258, avg=249.47, stdev= 5.77, samples=19 00:20:36.862 lat (msec) : 20=100.00% 00:20:36.862 cpu : usr=91.81%, sys=7.67%, ctx=17, majf=0, minf=0 00:20:36.862 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.862 filename0: (groupid=0, jobs=1): err= 0: pid=83200: Tue Nov 12 10:42:23 2024 00:20:36.862 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(312MiB/10005msec) 00:20:36.862 slat (nsec): min=6920, max=51021, avg=14212.31, stdev=3705.72 00:20:36.862 clat (usec): min=8177, max=14282, avg=11992.92, stdev=540.39 00:20:36.862 lat (usec): min=8189, max=14325, avg=12007.13, stdev=540.81 00:20:36.862 clat percentiles (usec): 00:20:36.862 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:36.862 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:20:36.862 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13304], 00:20:36.862 | 99.00th=[13698], 99.50th=[13698], 99.90th=[14222], 99.95th=[14222], 00:20:36.862 | 99.99th=[14222] 00:20:36.862 bw ( KiB/s): min=29952, max=33024, per=33.35%, avg=31932.63, stdev=692.42, samples=19 00:20:36.862 iops : min= 234, max= 258, avg=249.47, stdev= 5.41, samples=19 00:20:36.862 lat (msec) : 10=0.24%, 20=99.76% 00:20:36.862 cpu : usr=91.58%, sys=7.88%, ctx=6, majf=0, minf=0 00:20:36.862 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.862 filename0: (groupid=0, jobs=1): err= 0: pid=83201: Tue Nov 12 10:42:23 2024 00:20:36.862 read: IOPS=249, BW=31.2MiB/s 
(32.7MB/s)(312MiB/10005msec) 00:20:36.862 slat (nsec): min=7154, max=70692, avg=13704.22, stdev=3685.88 00:20:36.862 clat (usec): min=8181, max=14281, avg=11994.70, stdev=540.02 00:20:36.862 lat (usec): min=8194, max=14297, avg=12008.40, stdev=540.67 00:20:36.862 clat percentiles (usec): 00:20:36.862 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:36.862 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:20:36.862 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13304], 00:20:36.862 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14222], 99.95th=[14222], 00:20:36.862 | 99.99th=[14222] 00:20:36.862 bw ( KiB/s): min=29952, max=33024, per=33.35%, avg=31932.63, stdev=692.42, samples=19 00:20:36.862 iops : min= 234, max= 258, avg=249.47, stdev= 5.41, samples=19 00:20:36.862 lat (msec) : 10=0.24%, 20=99.76% 00:20:36.862 cpu : usr=91.82%, sys=7.62%, ctx=72, majf=0, minf=0 00:20:36.862 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.862 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.862 00:20:36.862 Run status group 0 (all jobs): 00:20:36.862 READ: bw=93.5MiB/s (98.1MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=936MiB (981MB), run=10001-10005msec 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.862 ************************************ 00:20:36.862 END TEST fio_dif_digest 00:20:36.862 ************************************ 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.862 00:20:36.862 real 0m10.913s 00:20:36.862 user 0m28.122s 00:20:36.862 sys 0m2.547s 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:36.862 10:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.862 10:42:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:36.862 10:42:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.862 rmmod nvme_tcp 00:20:36.862 rmmod nvme_fabrics 00:20:36.862 rmmod nvme_keyring 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82457 ']' 00:20:36.862 10:42:24 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82457 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 82457 ']' 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 82457 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82457 00:20:36.862 killing process with pid 82457 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.862 10:42:24 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.863 10:42:24 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82457' 00:20:36.863 10:42:24 nvmf_dif -- common/autotest_common.sh@971 -- # kill 82457 00:20:36.863 10:42:24 nvmf_dif -- common/autotest_common.sh@976 -- # wait 82457 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:36.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.863 Waiting for block devices as requested 00:20:36.863 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.863 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.863 10:42:24 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.863 10:42:25 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:36.863 10:42:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.863 10:42:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:36.863 10:42:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.863 10:42:25 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:36.863 00:20:36.863 real 0m58.776s 00:20:36.863 user 3m45.955s 00:20:36.863 sys 0m19.847s 00:20:36.863 ************************************ 00:20:36.863 END TEST nvmf_dif 00:20:36.863 ************************************ 00:20:36.863 10:42:25 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:36.863 10:42:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:36.863 10:42:25 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:36.863 10:42:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:36.863 10:42:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.863 10:42:25 -- common/autotest_common.sh@10 -- # set +x 00:20:36.863 ************************************ 00:20:36.863 START TEST nvmf_abort_qd_sizes 00:20:36.863 ************************************ 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:36.863 * Looking for test storage... 
00:20:36.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.863 --rc genhtml_branch_coverage=1 00:20:36.863 --rc genhtml_function_coverage=1 00:20:36.863 --rc genhtml_legend=1 00:20:36.863 --rc geninfo_all_blocks=1 00:20:36.863 --rc geninfo_unexecuted_blocks=1 00:20:36.863 00:20:36.863 ' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:36.863 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.863 --rc genhtml_branch_coverage=1 00:20:36.863 --rc genhtml_function_coverage=1 00:20:36.863 --rc genhtml_legend=1 00:20:36.863 --rc geninfo_all_blocks=1 00:20:36.863 --rc geninfo_unexecuted_blocks=1 00:20:36.863 00:20:36.863 ' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.863 --rc genhtml_branch_coverage=1 00:20:36.863 --rc genhtml_function_coverage=1 00:20:36.863 --rc genhtml_legend=1 00:20:36.863 --rc geninfo_all_blocks=1 00:20:36.863 --rc geninfo_unexecuted_blocks=1 00:20:36.863 00:20:36.863 ' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.863 --rc genhtml_branch_coverage=1 00:20:36.863 --rc genhtml_function_coverage=1 00:20:36.863 --rc genhtml_legend=1 00:20:36.863 --rc geninfo_all_blocks=1 00:20:36.863 --rc geninfo_unexecuted_blocks=1 00:20:36.863 00:20:36.863 ' 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.863 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:36.864 Cannot find device "nvmf_init_br" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:36.864 Cannot find device "nvmf_init_br2" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:36.864 Cannot find device "nvmf_tgt_br" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.864 Cannot find device "nvmf_tgt_br2" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:36.864 Cannot find device "nvmf_init_br" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:20:36.864 Cannot find device "nvmf_init_br2" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:36.864 Cannot find device "nvmf_tgt_br" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:36.864 Cannot find device "nvmf_tgt_br2" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:36.864 Cannot find device "nvmf_br" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:36.864 Cannot find device "nvmf_init_if" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:36.864 Cannot find device "nvmf_init_if2" 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.864 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
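Editor's note: at this point nvmf_veth_init has built the test network: a dedicated namespace for the target plus two veth pairs per side; the bridge that ties the host-side peers together, the iptables ACCEPT rules for port 4420, and the connectivity pings follow in the next entries. Condensed from the commands in this trace (names and /24 addresses exactly as logged):

    # Target side lives in its own network namespace, initiator side stays in the
    # root namespace, and the host-side veth peers are bridged (bridge + firewall
    # rules follow below in the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target    10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target    10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring every interface up, then (below) create the nvmf_br bridge, enslave the
    # *_br peers, and open TCP port 4420 with iptables rules tagged SPDK_NVMF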
00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.865 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:37.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:37.124 00:20:37.124 --- 10.0.0.3 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:37.124 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:37.124 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:37.124 00:20:37.124 --- 10.0.0.4 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:37.124 00:20:37.124 --- 10.0.0.1 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:37.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:20:37.124 00:20:37.124 --- 10.0.0.2 ping statistics --- 00:20:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.124 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:37.124 10:42:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.692 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.951 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=83844 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 83844 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 83844 ']' 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.952 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:37.952 [2024-11-12 10:42:26.621524] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
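What the nvmf_veth_init calls above amount to is a small two-namespace test fabric: the initiator-side veths nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) stay in the root namespace, the target-side veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, all peer ends are joined through the nvmf_br bridge, TCP port 4420 is opened in iptables, and the pings confirm connectivity in both directions. Below is a minimal standalone sketch of the same topology; interface names, addresses, and the port are taken from the log, while error handling and the SPDK_NVMF iptables comment tagging used by the real helper are left out.

#!/usr/bin/env bash
# Minimal sketch of the veth/namespace topology nvmf_veth_init builds above (requires root).
set -e

ip netns add nvmf_tgt_ns_spdk

# Initiator-side veth pairs stay in the root namespace ...
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# ... target-side pairs are moved into the target namespace.
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as used by the test: initiators .1/.2, targets .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the peer ends together so both sides can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on the default port and bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Abridged version of the connectivity check the log performs.
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1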
00:20:37.952 [2024-11-12 10:42:26.621617] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.211 [2024-11-12 10:42:26.775106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.211 [2024-11-12 10:42:26.816742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.211 [2024-11-12 10:42:26.816801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.211 [2024-11-12 10:42:26.816816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.211 [2024-11-12 10:42:26.816826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.211 [2024-11-12 10:42:26.816835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.211 [2024-11-12 10:42:26.817812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.211 [2024-11-12 10:42:26.817901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.211 [2024-11-12 10:42:26.818042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.211 [2024-11-12 10:42:26.818048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.211 [2024-11-12 10:42:26.854992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:38.211 10:42:26 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:38.211 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:38.471 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
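The nvme_in_userspace walk above identifies NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02, hence the 0108 and -p02 filters applied to lspci output, which here yields 0000:00:10.0 and 0000:00:11.0. A simplified sketch of that enumeration follows, reusing the same lspci pipeline the log shows; the real helper also honours PCI allow/block lists and per-device checks that are omitted here.

#!/usr/bin/env bash
# Simplified sketch of the PCI walk above: keep functions whose numeric class
# is 0108 and whose prog-if is 02 (NVM Express), exactly the filter the
# logged awk/grep pipeline applies.
mapfile -t nvmes < <(lspci -mm -n -D | grep -i -- -p02 \
                         | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')

printf 'NVMe controllers found: %d\n' "${#nvmes[@]}"
((${#nvmes[@]})) && printf '  %s\n' "${nvmes[@]}"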
00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.472 10:42:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 ************************************ 00:20:38.472 START TEST spdk_target_abort 00:20:38.472 ************************************ 00:20:38.472 10:42:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:20:38.472 10:42:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:38.472 10:42:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:38.472 10:42:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.472 10:42:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 spdk_targetn1 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 [2024-11-12 10:42:27.070517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:38.472 [2024-11-12 10:42:27.109543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.472 10:42:27 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:38.472 10:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:41.766 Initializing NVMe Controllers 00:20:41.766 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:41.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:41.766 Initialization complete. Launching workers. 
00:20:41.766 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10084, failed: 0 00:20:41.766 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1031, failed to submit 9053 00:20:41.766 success 778, unsuccessful 253, failed 0 00:20:41.766 10:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:41.766 10:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.053 Initializing NVMe Controllers 00:20:45.053 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:45.053 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:45.053 Initialization complete. Launching workers. 00:20:45.053 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8922, failed: 0 00:20:45.053 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7751 00:20:45.053 success 392, unsuccessful 779, failed 0 00:20:45.053 10:42:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.053 10:42:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.340 Initializing NVMe Controllers 00:20:48.340 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.340 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.340 Initialization complete. Launching workers. 
00:20:48.340 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31293, failed: 0 00:20:48.340 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2280, failed to submit 29013 00:20:48.340 success 442, unsuccessful 1838, failed 0 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.340 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83844 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 83844 ']' 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 83844 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83844 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:48.908 killing process with pid 83844 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83844' 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 83844 00:20:48.908 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 83844 00:20:49.167 00:20:49.167 real 0m10.676s 00:20:49.167 user 0m41.034s 00:20:49.167 sys 0m2.054s 00:20:49.167 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:49.167 10:42:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:49.167 ************************************ 00:20:49.167 END TEST spdk_target_abort 00:20:49.167 ************************************ 00:20:49.167 10:42:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:49.167 10:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:49.167 10:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:49.168 10:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:49.168 ************************************ 00:20:49.168 START TEST kernel_target_abort 00:20:49.168 
************************************ 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:49.168 10:42:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:49.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:49.427 Waiting for block devices as requested 00:20:49.427 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:49.686 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:49.686 No valid GPT data, bailing 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:49.686 No valid GPT data, bailing 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:49.686 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:49.945 No valid GPT data, bailing 00:20:49.945 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:49.945 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:49.945 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:49.945 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:49.946 No valid GPT data, bailing 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 --hostid=96df7a2d-651c-49c0-b1c8-dd965eb48096 -a 10.0.0.1 -t tcp -s 4420 00:20:49.946 00:20:49.946 Discovery Log Number of Records 2, Generation counter 2 00:20:49.946 =====Discovery Log Entry 0====== 00:20:49.946 trtype: tcp 00:20:49.946 adrfam: ipv4 00:20:49.946 subtype: current discovery subsystem 00:20:49.946 treq: not specified, sq flow control disable supported 00:20:49.946 portid: 1 00:20:49.946 trsvcid: 4420 00:20:49.946 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:49.946 traddr: 10.0.0.1 00:20:49.946 eflags: none 00:20:49.946 sectype: none 00:20:49.946 =====Discovery Log Entry 1====== 00:20:49.946 trtype: tcp 00:20:49.946 adrfam: ipv4 00:20:49.946 subtype: nvme subsystem 00:20:49.946 treq: not specified, sq flow control disable supported 00:20:49.946 portid: 1 00:20:49.946 trsvcid: 4420 00:20:49.946 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:49.946 traddr: 10.0.0.1 00:20:49.946 eflags: none 00:20:49.946 sectype: none 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:49.946 10:42:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:49.946 10:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:53.242 Initializing NVMe Controllers 00:20:53.242 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:53.242 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:53.242 Initialization complete. Launching workers. 00:20:53.242 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32262, failed: 0 00:20:53.242 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32262, failed to submit 0 00:20:53.242 success 0, unsuccessful 32262, failed 0 00:20:53.242 10:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:53.242 10:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:56.531 Initializing NVMe Controllers 00:20:56.531 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:56.531 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:56.531 Initialization complete. Launching workers. 
00:20:56.531 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63512, failed: 0 00:20:56.531 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25711, failed to submit 37801 00:20:56.531 success 0, unsuccessful 25711, failed 0 00:20:56.531 10:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:56.531 10:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:59.823 Initializing NVMe Controllers 00:20:59.823 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:59.823 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:59.823 Initialization complete. Launching workers. 00:20:59.823 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67700, failed: 0 00:20:59.823 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16878, failed to submit 50822 00:20:59.823 success 0, unsuccessful 16878, failed 0 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:59.823 10:42:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:00.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:01.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.768 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.768 00:21:01.768 real 0m12.565s 00:21:01.768 user 0m5.660s 00:21:01.768 sys 0m4.288s 00:21:01.768 10:42:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.768 ************************************ 00:21:01.768 10:42:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:01.769 END TEST kernel_target_abort 00:21:01.769 ************************************ 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:01.769 
10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.769 rmmod nvme_tcp 00:21:01.769 rmmod nvme_fabrics 00:21:01.769 rmmod nvme_keyring 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 83844 ']' 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 83844 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 83844 ']' 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 83844 00:21:01.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (83844) - No such process 00:21:01.769 Process with pid 83844 is not found 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 83844 is not found' 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:01.769 10:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:02.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.286 Waiting for block devices as requested 00:21:02.286 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.286 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:02.286 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:02.545 10:42:51 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:02.545 00:21:02.545 real 0m26.200s 00:21:02.545 user 0m47.848s 00:21:02.545 sys 0m7.751s 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:02.545 10:42:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:02.545 ************************************ 00:21:02.545 END TEST nvmf_abort_qd_sizes 00:21:02.545 ************************************ 00:21:02.805 10:42:51 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:02.805 10:42:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:02.805 10:42:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:02.805 10:42:51 -- common/autotest_common.sh@10 -- # set +x 00:21:02.805 ************************************ 00:21:02.805 START TEST keyring_file 00:21:02.805 ************************************ 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:02.805 * Looking for test storage... 
00:21:02.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.805 --rc genhtml_branch_coverage=1 00:21:02.805 --rc genhtml_function_coverage=1 00:21:02.805 --rc genhtml_legend=1 00:21:02.805 --rc geninfo_all_blocks=1 00:21:02.805 --rc geninfo_unexecuted_blocks=1 00:21:02.805 00:21:02.805 ' 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.805 --rc genhtml_branch_coverage=1 00:21:02.805 --rc genhtml_function_coverage=1 00:21:02.805 --rc genhtml_legend=1 00:21:02.805 --rc geninfo_all_blocks=1 00:21:02.805 --rc 
geninfo_unexecuted_blocks=1 00:21:02.805 00:21:02.805 ' 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.805 --rc genhtml_branch_coverage=1 00:21:02.805 --rc genhtml_function_coverage=1 00:21:02.805 --rc genhtml_legend=1 00:21:02.805 --rc geninfo_all_blocks=1 00:21:02.805 --rc geninfo_unexecuted_blocks=1 00:21:02.805 00:21:02.805 ' 00:21:02.805 10:42:51 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.805 --rc genhtml_branch_coverage=1 00:21:02.805 --rc genhtml_function_coverage=1 00:21:02.805 --rc genhtml_legend=1 00:21:02.805 --rc geninfo_all_blocks=1 00:21:02.805 --rc geninfo_unexecuted_blocks=1 00:21:02.805 00:21:02.805 ' 00:21:02.805 10:42:51 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:02.805 10:42:51 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.805 10:42:51 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.805 10:42:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.065 10:42:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.065 10:42:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.065 10:42:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.065 10:42:51 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.065 10:42:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.065 10:42:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.065 10:42:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:03.065 10:42:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:03.065 10:42:51 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FEeNmszvzQ 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FEeNmszvzQ 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FEeNmszvzQ 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FEeNmszvzQ 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.32GJkyzGSe 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:03.065 10:42:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.32GJkyzGSe 00:21:03.065 10:42:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.32GJkyzGSe 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.32GJkyzGSe 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=84748 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.065 10:42:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84748 00:21:03.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
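The prep_key trace above reduces to a small helper: it writes an NVMe TLS interchange-format PSK (prefix NVMeTLSkey-1, the hex key, digest 0) into a mktemp file and locks the permissions down to 0600, which a later negative test (chmod 0660) shows is mandatory. A condensed bash sketch, assuming the helpers behave exactly as traced here; this is not the verbatim keyring/common.sh source:

prep_key() {                                              # sketch; mirrors the xtrace above
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                                        # e.g. /tmp/tmp.FEeNmszvzQ
    format_interchange_psk "$key" "$digest" > "$path"     # emits "NVMeTLSkey-1:..." via a python one-liner
    chmod 0600 "$path"                                    # keyring_file_add_key rejects looser permissions
    echo "$path"
}
key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)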
00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84748 ']' 00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.065 10:42:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:03.065 [2024-11-12 10:42:51.764365] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:21:03.065 [2024-11-12 10:42:51.764661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84748 ] 00:21:03.324 [2024-11-12 10:42:51.915991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.324 [2024-11-12 10:42:51.956298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.324 [2024-11-12 10:42:52.004395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:03.583 10:42:52 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.583 10:42:52 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:03.583 10:42:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:03.583 10:42:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.583 10:42:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:03.583 [2024-11-12 10:42:52.158271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.583 null0 00:21:03.583 [2024-11-12 10:42:52.190237] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.583 [2024-11-12 10:42:52.190590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.584 10:42:52 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:03.584 [2024-11-12 10:42:52.222168] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:03.584 request: 00:21:03.584 { 
00:21:03.584 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.584 "secure_channel": false, 00:21:03.584 "listen_address": { 00:21:03.584 "trtype": "tcp", 00:21:03.584 "traddr": "127.0.0.1", 00:21:03.584 "trsvcid": "4420" 00:21:03.584 }, 00:21:03.584 "method": "nvmf_subsystem_add_listener", 00:21:03.584 "req_id": 1 00:21:03.584 } 00:21:03.584 Got JSON-RPC error response 00:21:03.584 response: 00:21:03.584 { 00:21:03.584 "code": -32602, 00:21:03.584 "message": "Invalid parameters" 00:21:03.584 } 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.584 10:42:52 keyring_file -- keyring/file.sh@47 -- # bperfpid=84758 00:21:03.584 10:42:52 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:03.584 10:42:52 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84758 /var/tmp/bperf.sock 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84758 ']' 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:03.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.584 10:42:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:03.584 [2024-11-12 10:42:52.289056] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:21:03.584 [2024-11-12 10:42:52.289351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84758 ] 00:21:03.843 [2024-11-12 10:42:52.442403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.843 [2024-11-12 10:42:52.481625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.843 [2024-11-12 10:42:52.515771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:03.843 10:42:52 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.843 10:42:52 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:03.843 10:42:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:03.843 10:42:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:04.102 10:42:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.32GJkyzGSe 00:21:04.102 10:42:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.32GJkyzGSe 00:21:04.360 10:42:53 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:04.360 10:42:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:04.360 10:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.360 10:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.360 10:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:04.619 10:42:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FEeNmszvzQ == \/\t\m\p\/\t\m\p\.\F\E\e\N\m\s\z\v\z\Q ]] 00:21:04.619 10:42:53 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:04.619 10:42:53 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:04.619 10:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.619 10:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:04.619 10:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.878 10:42:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.32GJkyzGSe == \/\t\m\p\/\t\m\p\.\3\2\G\J\k\y\z\G\S\e ]] 00:21:04.878 10:42:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:04.878 10:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:04.878 10:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:04.878 10:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:04.878 10:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.878 10:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.137 10:42:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:05.137 10:42:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:05.137 10:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:05.137 10:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:05.137 10:42:53 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.137 10:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.137 10:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:05.704 10:42:54 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:05.704 10:42:54 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.704 10:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.704 [2024-11-12 10:42:54.401623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.963 nvme0n1 00:21:05.963 10:42:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:05.963 10:42:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:05.963 10:42:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:05.963 10:42:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:05.963 10:42:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.963 10:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.222 10:42:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:06.222 10:42:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:06.222 10:42:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:06.222 10:42:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:06.222 10:42:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:06.222 10:42:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:06.222 10:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.480 10:42:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:06.480 10:42:55 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:06.480 Running I/O for 1 seconds... 
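Up to this point the trace is the positive path: both interchange keys are registered with bdevperf's keyring over /var/tmp/bperf.sock, the controller is attached over TCP with --psk key0, and key0's refcount rises from 1 to 2 while key1 stays at 1, after which perform_tests drives I/O. Roughly the same sequence by hand (hedged sketch; the $RPC shorthand and the combined jq filter are mine, the socket path, key names, and arguments are taken from the log):

RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ
$RPC keyring_file_add_key key1 /tmp/tmp.32GJkyzGSe
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# A live TLS connection holds a reference, so key0's refcnt reads 2 here:
$RPC keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'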
00:21:07.415 13569.00 IOPS, 53.00 MiB/s 00:21:07.415 Latency(us) 00:21:07.415 [2024-11-12T10:42:56.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.415 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:07.415 nvme0n1 : 1.01 13619.05 53.20 0.00 0.00 9374.79 4110.89 19779.96 00:21:07.415 [2024-11-12T10:42:56.173Z] =================================================================================================================== 00:21:07.415 [2024-11-12T10:42:56.173Z] Total : 13619.05 53.20 0.00 0.00 9374.79 4110.89 19779.96 00:21:07.415 { 00:21:07.415 "results": [ 00:21:07.415 { 00:21:07.415 "job": "nvme0n1", 00:21:07.415 "core_mask": "0x2", 00:21:07.415 "workload": "randrw", 00:21:07.415 "percentage": 50, 00:21:07.415 "status": "finished", 00:21:07.415 "queue_depth": 128, 00:21:07.415 "io_size": 4096, 00:21:07.415 "runtime": 1.005797, 00:21:07.415 "iops": 13619.050365033898, 00:21:07.415 "mibps": 53.19941548841366, 00:21:07.415 "io_failed": 0, 00:21:07.415 "io_timeout": 0, 00:21:07.415 "avg_latency_us": 9374.78870704416, 00:21:07.415 "min_latency_us": 4110.894545454546, 00:21:07.415 "max_latency_us": 19779.956363636364 00:21:07.415 } 00:21:07.415 ], 00:21:07.415 "core_count": 1 00:21:07.415 } 00:21:07.415 10:42:56 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:07.415 10:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:07.674 10:42:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:07.674 10:42:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:07.674 10:42:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.674 10:42:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.674 10:42:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.674 10:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.241 10:42:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:08.241 10:42:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.241 10:42:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:08.241 10:42:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:08.241 10:42:56 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.241 10:42:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:08.241 10:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:08.500 [2024-11-12 10:42:57.194652] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:08.500 [2024-11-12 10:42:57.195339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa13770 (107): Transport endpoint is not connected 00:21:08.500 [2024-11-12 10:42:57.196328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa13770 (9): Bad file descriptor 00:21:08.500 [2024-11-12 10:42:57.197326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:08.500 [2024-11-12 10:42:57.197349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:08.500 [2024-11-12 10:42:57.197359] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:08.500 [2024-11-12 10:42:57.197369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:08.500 request: 00:21:08.500 { 00:21:08.500 "name": "nvme0", 00:21:08.500 "trtype": "tcp", 00:21:08.500 "traddr": "127.0.0.1", 00:21:08.500 "adrfam": "ipv4", 00:21:08.500 "trsvcid": "4420", 00:21:08.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:08.500 "prchk_reftag": false, 00:21:08.500 "prchk_guard": false, 00:21:08.500 "hdgst": false, 00:21:08.500 "ddgst": false, 00:21:08.500 "psk": "key1", 00:21:08.500 "allow_unrecognized_csi": false, 00:21:08.500 "method": "bdev_nvme_attach_controller", 00:21:08.500 "req_id": 1 00:21:08.500 } 00:21:08.500 Got JSON-RPC error response 00:21:08.500 response: 00:21:08.500 { 00:21:08.500 "code": -5, 00:21:08.500 "message": "Input/output error" 00:21:08.500 } 00:21:08.500 10:42:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:08.500 10:42:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.500 10:42:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.500 10:42:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.500 10:42:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:08.500 10:42:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:08.500 10:42:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.500 10:42:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.500 10:42:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.500 10:42:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.759 10:42:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:08.759 10:42:57 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:08.759 10:42:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:08.759 10:42:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.759 10:42:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.759 10:42:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.759 10:42:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.018 10:42:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:09.018 10:42:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:09.018 10:42:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:09.277 10:42:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:09.277 10:42:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:09.535 10:42:58 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:09.535 10:42:58 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:09.536 10:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.794 10:42:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:09.795 10:42:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FEeNmszvzQ 00:21:09.795 10:42:58 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:09.795 10:42:58 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.795 10:42:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:09.795 10:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:10.053 [2024-11-12 10:42:58.734759] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FEeNmszvzQ': 0100660 00:21:10.053 [2024-11-12 10:42:58.734795] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:10.053 request: 00:21:10.053 { 00:21:10.053 "name": "key0", 00:21:10.053 "path": "/tmp/tmp.FEeNmszvzQ", 00:21:10.053 "method": "keyring_file_add_key", 00:21:10.053 "req_id": 1 00:21:10.053 } 00:21:10.053 Got JSON-RPC error response 00:21:10.053 response: 00:21:10.053 { 00:21:10.053 "code": -1, 00:21:10.053 "message": "Operation not permitted" 00:21:10.054 } 00:21:10.054 10:42:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:10.054 10:42:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.054 10:42:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.054 10:42:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.054 10:42:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FEeNmszvzQ 00:21:10.054 10:42:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:10.054 10:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FEeNmszvzQ 00:21:10.313 10:42:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FEeNmszvzQ 00:21:10.313 10:42:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:10.313 10:42:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:10.313 10:42:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.313 10:42:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.313 10:42:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:10.313 10:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.572 10:42:59 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:10.572 10:42:59 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:10.572 10:42:59 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.572 10:42:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:10.572 10:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:10.831 [2024-11-12 10:42:59.434935] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FEeNmszvzQ': No such file or directory 00:21:10.831 [2024-11-12 10:42:59.434971] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:10.831 [2024-11-12 10:42:59.435005] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:10.831 [2024-11-12 10:42:59.435012] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:10.831 [2024-11-12 10:42:59.435020] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:10.831 [2024-11-12 10:42:59.435027] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:10.831 request: 00:21:10.831 { 00:21:10.831 "name": "nvme0", 00:21:10.831 "trtype": "tcp", 00:21:10.831 "traddr": "127.0.0.1", 00:21:10.831 "adrfam": "ipv4", 00:21:10.831 "trsvcid": "4420", 00:21:10.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.831 "prchk_reftag": false, 00:21:10.831 "prchk_guard": false, 00:21:10.831 "hdgst": false, 00:21:10.831 "ddgst": false, 00:21:10.831 "psk": "key0", 00:21:10.831 "allow_unrecognized_csi": false, 00:21:10.831 "method": "bdev_nvme_attach_controller", 00:21:10.831 "req_id": 1 00:21:10.831 } 00:21:10.831 Got JSON-RPC error response 00:21:10.831 response: 00:21:10.831 { 00:21:10.831 "code": -19, 00:21:10.831 "message": "No such device" 00:21:10.831 } 00:21:10.831 10:42:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:10.831 10:42:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.831 10:42:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.831 10:42:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.831 10:42:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:10.831 10:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:11.089 10:42:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:11.089 
10:42:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sn6Lmcsmtz 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:11.089 10:42:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sn6Lmcsmtz 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sn6Lmcsmtz 00:21:11.089 10:42:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.sn6Lmcsmtz 00:21:11.089 10:42:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sn6Lmcsmtz 00:21:11.089 10:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sn6Lmcsmtz 00:21:11.348 10:42:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:11.348 10:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:11.606 nvme0n1 00:21:11.606 10:43:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:11.606 10:43:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.606 10:43:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.606 10:43:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.606 10:43:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.606 10:43:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.865 10:43:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:11.865 10:43:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:11.865 10:43:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:12.123 10:43:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:12.123 10:43:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:12.123 10:43:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.123 10:43:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.123 10:43:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.382 10:43:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:12.382 10:43:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:12.382 10:43:00 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.382 10:43:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:12.382 10:43:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:12.382 10:43:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.382 10:43:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:12.641 10:43:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:12.641 10:43:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:12.641 10:43:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:12.900 10:43:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:12.900 10:43:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.900 10:43:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:13.159 10:43:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:13.159 10:43:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sn6Lmcsmtz 00:21:13.159 10:43:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sn6Lmcsmtz 00:21:13.418 10:43:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.32GJkyzGSe 00:21:13.418 10:43:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.32GJkyzGSe 00:21:13.418 10:43:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.418 10:43:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.986 nvme0n1 00:21:13.986 10:43:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:13.986 10:43:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:14.245 10:43:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:14.245 "subsystems": [ 00:21:14.245 { 00:21:14.245 "subsystem": "keyring", 00:21:14.245 "config": [ 00:21:14.245 { 00:21:14.245 "method": "keyring_file_add_key", 00:21:14.245 "params": { 00:21:14.245 "name": "key0", 00:21:14.245 "path": "/tmp/tmp.sn6Lmcsmtz" 00:21:14.245 } 00:21:14.245 }, 00:21:14.245 { 00:21:14.246 "method": "keyring_file_add_key", 00:21:14.246 "params": { 00:21:14.246 "name": "key1", 00:21:14.246 "path": "/tmp/tmp.32GJkyzGSe" 00:21:14.246 } 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": "iobuf", 00:21:14.246 "config": [ 00:21:14.246 { 00:21:14.246 "method": "iobuf_set_options", 00:21:14.246 "params": { 00:21:14.246 "small_pool_count": 8192, 00:21:14.246 "large_pool_count": 1024, 00:21:14.246 "small_bufsize": 8192, 00:21:14.246 "large_bufsize": 135168, 00:21:14.246 "enable_numa": false 00:21:14.246 } 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": 
"sock", 00:21:14.246 "config": [ 00:21:14.246 { 00:21:14.246 "method": "sock_set_default_impl", 00:21:14.246 "params": { 00:21:14.246 "impl_name": "uring" 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "sock_impl_set_options", 00:21:14.246 "params": { 00:21:14.246 "impl_name": "ssl", 00:21:14.246 "recv_buf_size": 4096, 00:21:14.246 "send_buf_size": 4096, 00:21:14.246 "enable_recv_pipe": true, 00:21:14.246 "enable_quickack": false, 00:21:14.246 "enable_placement_id": 0, 00:21:14.246 "enable_zerocopy_send_server": true, 00:21:14.246 "enable_zerocopy_send_client": false, 00:21:14.246 "zerocopy_threshold": 0, 00:21:14.246 "tls_version": 0, 00:21:14.246 "enable_ktls": false 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "sock_impl_set_options", 00:21:14.246 "params": { 00:21:14.246 "impl_name": "posix", 00:21:14.246 "recv_buf_size": 2097152, 00:21:14.246 "send_buf_size": 2097152, 00:21:14.246 "enable_recv_pipe": true, 00:21:14.246 "enable_quickack": false, 00:21:14.246 "enable_placement_id": 0, 00:21:14.246 "enable_zerocopy_send_server": true, 00:21:14.246 "enable_zerocopy_send_client": false, 00:21:14.246 "zerocopy_threshold": 0, 00:21:14.246 "tls_version": 0, 00:21:14.246 "enable_ktls": false 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "sock_impl_set_options", 00:21:14.246 "params": { 00:21:14.246 "impl_name": "uring", 00:21:14.246 "recv_buf_size": 2097152, 00:21:14.246 "send_buf_size": 2097152, 00:21:14.246 "enable_recv_pipe": true, 00:21:14.246 "enable_quickack": false, 00:21:14.246 "enable_placement_id": 0, 00:21:14.246 "enable_zerocopy_send_server": false, 00:21:14.246 "enable_zerocopy_send_client": false, 00:21:14.246 "zerocopy_threshold": 0, 00:21:14.246 "tls_version": 0, 00:21:14.246 "enable_ktls": false 00:21:14.246 } 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": "vmd", 00:21:14.246 "config": [] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": "accel", 00:21:14.246 "config": [ 00:21:14.246 { 00:21:14.246 "method": "accel_set_options", 00:21:14.246 "params": { 00:21:14.246 "small_cache_size": 128, 00:21:14.246 "large_cache_size": 16, 00:21:14.246 "task_count": 2048, 00:21:14.246 "sequence_count": 2048, 00:21:14.246 "buf_count": 2048 00:21:14.246 } 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": "bdev", 00:21:14.246 "config": [ 00:21:14.246 { 00:21:14.246 "method": "bdev_set_options", 00:21:14.246 "params": { 00:21:14.246 "bdev_io_pool_size": 65535, 00:21:14.246 "bdev_io_cache_size": 256, 00:21:14.246 "bdev_auto_examine": true, 00:21:14.246 "iobuf_small_cache_size": 128, 00:21:14.246 "iobuf_large_cache_size": 16 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_raid_set_options", 00:21:14.246 "params": { 00:21:14.246 "process_window_size_kb": 1024, 00:21:14.246 "process_max_bandwidth_mb_sec": 0 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_iscsi_set_options", 00:21:14.246 "params": { 00:21:14.246 "timeout_sec": 30 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_nvme_set_options", 00:21:14.246 "params": { 00:21:14.246 "action_on_timeout": "none", 00:21:14.246 "timeout_us": 0, 00:21:14.246 "timeout_admin_us": 0, 00:21:14.246 "keep_alive_timeout_ms": 10000, 00:21:14.246 "arbitration_burst": 0, 00:21:14.246 "low_priority_weight": 0, 00:21:14.246 "medium_priority_weight": 0, 00:21:14.246 "high_priority_weight": 0, 00:21:14.246 "nvme_adminq_poll_period_us": 
10000, 00:21:14.246 "nvme_ioq_poll_period_us": 0, 00:21:14.246 "io_queue_requests": 512, 00:21:14.246 "delay_cmd_submit": true, 00:21:14.246 "transport_retry_count": 4, 00:21:14.246 "bdev_retry_count": 3, 00:21:14.246 "transport_ack_timeout": 0, 00:21:14.246 "ctrlr_loss_timeout_sec": 0, 00:21:14.246 "reconnect_delay_sec": 0, 00:21:14.246 "fast_io_fail_timeout_sec": 0, 00:21:14.246 "disable_auto_failback": false, 00:21:14.246 "generate_uuids": false, 00:21:14.246 "transport_tos": 0, 00:21:14.246 "nvme_error_stat": false, 00:21:14.246 "rdma_srq_size": 0, 00:21:14.246 "io_path_stat": false, 00:21:14.246 "allow_accel_sequence": false, 00:21:14.246 "rdma_max_cq_size": 0, 00:21:14.246 "rdma_cm_event_timeout_ms": 0, 00:21:14.246 "dhchap_digests": [ 00:21:14.246 "sha256", 00:21:14.246 "sha384", 00:21:14.246 "sha512" 00:21:14.246 ], 00:21:14.246 "dhchap_dhgroups": [ 00:21:14.246 "null", 00:21:14.246 "ffdhe2048", 00:21:14.246 "ffdhe3072", 00:21:14.246 "ffdhe4096", 00:21:14.246 "ffdhe6144", 00:21:14.246 "ffdhe8192" 00:21:14.246 ] 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_nvme_attach_controller", 00:21:14.246 "params": { 00:21:14.246 "name": "nvme0", 00:21:14.246 "trtype": "TCP", 00:21:14.246 "adrfam": "IPv4", 00:21:14.246 "traddr": "127.0.0.1", 00:21:14.246 "trsvcid": "4420", 00:21:14.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.246 "prchk_reftag": false, 00:21:14.246 "prchk_guard": false, 00:21:14.246 "ctrlr_loss_timeout_sec": 0, 00:21:14.246 "reconnect_delay_sec": 0, 00:21:14.246 "fast_io_fail_timeout_sec": 0, 00:21:14.246 "psk": "key0", 00:21:14.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.246 "hdgst": false, 00:21:14.246 "ddgst": false, 00:21:14.246 "multipath": "multipath" 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_nvme_set_hotplug", 00:21:14.246 "params": { 00:21:14.246 "period_us": 100000, 00:21:14.246 "enable": false 00:21:14.246 } 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "method": "bdev_wait_for_examine" 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }, 00:21:14.246 { 00:21:14.246 "subsystem": "nbd", 00:21:14.246 "config": [] 00:21:14.246 } 00:21:14.246 ] 00:21:14.246 }' 00:21:14.246 10:43:02 keyring_file -- keyring/file.sh@115 -- # killprocess 84758 00:21:14.246 10:43:02 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84758 ']' 00:21:14.246 10:43:02 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84758 00:21:14.246 10:43:02 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:14.246 10:43:02 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:14.246 10:43:02 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84758 00:21:14.246 killing process with pid 84758 00:21:14.246 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.246 00:21:14.246 Latency(us) 00:21:14.246 [2024-11-12T10:43:03.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.246 [2024-11-12T10:43:03.004Z] =================================================================================================================== 00:21:14.247 [2024-11-12T10:43:03.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84758' 00:21:14.247 
10:43:02 keyring_file -- common/autotest_common.sh@971 -- # kill 84758 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@976 -- # wait 84758 00:21:14.247 10:43:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=84996 00:21:14.247 10:43:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84996 /var/tmp/bperf.sock 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84996 ']' 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:14.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.247 10:43:02 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:14.247 10:43:02 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.247 10:43:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:14.247 "subsystems": [ 00:21:14.247 { 00:21:14.247 "subsystem": "keyring", 00:21:14.247 "config": [ 00:21:14.247 { 00:21:14.247 "method": "keyring_file_add_key", 00:21:14.247 "params": { 00:21:14.247 "name": "key0", 00:21:14.247 "path": "/tmp/tmp.sn6Lmcsmtz" 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "keyring_file_add_key", 00:21:14.247 "params": { 00:21:14.247 "name": "key1", 00:21:14.247 "path": "/tmp/tmp.32GJkyzGSe" 00:21:14.247 } 00:21:14.247 } 00:21:14.247 ] 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "subsystem": "iobuf", 00:21:14.247 "config": [ 00:21:14.247 { 00:21:14.247 "method": "iobuf_set_options", 00:21:14.247 "params": { 00:21:14.247 "small_pool_count": 8192, 00:21:14.247 "large_pool_count": 1024, 00:21:14.247 "small_bufsize": 8192, 00:21:14.247 "large_bufsize": 135168, 00:21:14.247 "enable_numa": false 00:21:14.247 } 00:21:14.247 } 00:21:14.247 ] 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "subsystem": "sock", 00:21:14.247 "config": [ 00:21:14.247 { 00:21:14.247 "method": "sock_set_default_impl", 00:21:14.247 "params": { 00:21:14.247 "impl_name": "uring" 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "sock_impl_set_options", 00:21:14.247 "params": { 00:21:14.247 "impl_name": "ssl", 00:21:14.247 "recv_buf_size": 4096, 00:21:14.247 "send_buf_size": 4096, 00:21:14.247 "enable_recv_pipe": true, 00:21:14.247 "enable_quickack": false, 00:21:14.247 "enable_placement_id": 0, 00:21:14.247 "enable_zerocopy_send_server": true, 00:21:14.247 "enable_zerocopy_send_client": false, 00:21:14.247 "zerocopy_threshold": 0, 00:21:14.247 "tls_version": 0, 00:21:14.247 "enable_ktls": false 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "sock_impl_set_options", 00:21:14.247 "params": { 00:21:14.247 "impl_name": "posix", 00:21:14.247 "recv_buf_size": 2097152, 00:21:14.247 "send_buf_size": 2097152, 00:21:14.247 "enable_recv_pipe": true, 00:21:14.247 "enable_quickack": false, 00:21:14.247 "enable_placement_id": 0, 00:21:14.247 "enable_zerocopy_send_server": true, 00:21:14.247 "enable_zerocopy_send_client": false, 00:21:14.247 "zerocopy_threshold": 0, 00:21:14.247 "tls_version": 0, 00:21:14.247 "enable_ktls": false 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "sock_impl_set_options", 00:21:14.247 "params": { 00:21:14.247 "impl_name": "uring", 00:21:14.247 
"recv_buf_size": 2097152, 00:21:14.247 "send_buf_size": 2097152, 00:21:14.247 "enable_recv_pipe": true, 00:21:14.247 "enable_quickack": false, 00:21:14.247 "enable_placement_id": 0, 00:21:14.247 "enable_zerocopy_send_server": false, 00:21:14.247 "enable_zerocopy_send_client": false, 00:21:14.247 "zerocopy_threshold": 0, 00:21:14.247 "tls_version": 0, 00:21:14.247 "enable_ktls": false 00:21:14.247 } 00:21:14.247 } 00:21:14.247 ] 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "subsystem": "vmd", 00:21:14.247 "config": [] 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "subsystem": "accel", 00:21:14.247 "config": [ 00:21:14.247 { 00:21:14.247 "method": "accel_set_options", 00:21:14.247 "params": { 00:21:14.247 "small_cache_size": 128, 00:21:14.247 "large_cache_size": 16, 00:21:14.247 "task_count": 2048, 00:21:14.247 "sequence_count": 2048, 00:21:14.247 "buf_count": 2048 00:21:14.247 } 00:21:14.247 } 00:21:14.247 ] 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "subsystem": "bdev", 00:21:14.247 "config": [ 00:21:14.247 { 00:21:14.247 "method": "bdev_set_options", 00:21:14.247 "params": { 00:21:14.247 "bdev_io_pool_size": 65535, 00:21:14.247 "bdev_io_cache_size": 256, 00:21:14.247 "bdev_auto_examine": true, 00:21:14.247 "iobuf_small_cache_size": 128, 00:21:14.247 "iobuf_large_cache_size": 16 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "bdev_raid_set_options", 00:21:14.247 "params": { 00:21:14.247 "process_window_size_kb": 1024, 00:21:14.247 "process_max_bandwidth_mb_sec": 0 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "bdev_iscsi_set_options", 00:21:14.247 "params": { 00:21:14.247 "timeout_sec": 30 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "bdev_nvme_set_options", 00:21:14.247 "params": { 00:21:14.247 "action_on_timeout": "none", 00:21:14.247 "timeout_us": 0, 00:21:14.247 "timeout_admin_us": 0, 00:21:14.247 "keep_alive_timeout_ms": 10000, 00:21:14.247 "arbitration_burst": 0, 00:21:14.247 "low_priority_weight": 0, 00:21:14.247 "medium_priority_weight": 0, 00:21:14.247 "high_priority_weight": 0, 00:21:14.247 "nvme_adminq_poll_period_us": 10000, 00:21:14.247 "nvme_ioq_poll_period_us": 0, 00:21:14.247 "io_queue_requests": 512, 00:21:14.247 "delay_cmd_submit": true, 00:21:14.247 "transport_retry_count": 4, 00:21:14.247 "bdev_retry_count": 3, 00:21:14.247 "transport_ack_timeout": 0, 00:21:14.247 "ctrlr_loss_timeout_sec": 0, 00:21:14.247 "reconnect_delay_sec": 0, 00:21:14.247 "fast_io_fail_timeout_sec": 0, 00:21:14.247 "disable_auto_failback": false, 00:21:14.247 "generate_uuids": false, 00:21:14.247 "transport_tos": 0, 00:21:14.247 "nvme_error_stat": false, 00:21:14.247 "rdma_srq_size": 0, 00:21:14.247 "io_path_stat": false, 00:21:14.247 "allow_accel_sequence": false, 00:21:14.247 "rdma_max_cq_size": 0, 00:21:14.247 "rdma_cm_event_timeout_ms": 0, 00:21:14.247 "dhchap_digests": [ 00:21:14.247 "sha256", 00:21:14.247 "sha384", 00:21:14.247 "sha512" 00:21:14.247 ], 00:21:14.247 "dhchap_dhgroups": [ 00:21:14.247 "null", 00:21:14.247 "ffdhe2048", 00:21:14.247 "ffdhe3072", 00:21:14.247 "ffdhe4096", 00:21:14.247 "ffdhe6144", 00:21:14.247 "ffdhe8192" 00:21:14.247 ] 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "bdev_nvme_attach_controller", 00:21:14.247 "params": { 00:21:14.247 "name": "nvme0", 00:21:14.247 "trtype": "TCP", 00:21:14.247 "adrfam": "IPv4", 00:21:14.247 "traddr": "127.0.0.1", 00:21:14.247 "trsvcid": "4420", 00:21:14.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.247 "prchk_reftag": false, 00:21:14.247 
"prchk_guard": false, 00:21:14.247 "ctrlr_loss_timeout_sec": 0, 00:21:14.247 "reconnect_delay_sec": 0, 00:21:14.247 "fast_io_fail_timeout_sec": 0, 00:21:14.247 "psk": "key0", 00:21:14.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.247 "hdgst": false, 00:21:14.247 "ddgst": false, 00:21:14.247 "multipath": "multipath" 00:21:14.247 } 00:21:14.247 }, 00:21:14.247 { 00:21:14.247 "method": "bdev_nvme_set_hotplug", 00:21:14.247 "params": { 00:21:14.247 "period_us": 100000, 00:21:14.247 "enable": false 00:21:14.247 } 00:21:14.247 }, 00:21:14.248 { 00:21:14.248 "method": "bdev_wait_for_examine" 00:21:14.248 } 00:21:14.248 ] 00:21:14.248 }, 00:21:14.248 { 00:21:14.248 "subsystem": "nbd", 00:21:14.248 "config": [] 00:21:14.248 } 00:21:14.248 ] 00:21:14.248 }' 00:21:14.248 10:43:02 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:14.248 10:43:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:14.248 [2024-11-12 10:43:03.000990] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 00:21:14.248 [2024-11-12 10:43:03.001110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84996 ] 00:21:14.507 [2024-11-12 10:43:03.137764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.507 [2024-11-12 10:43:03.166247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.766 [2024-11-12 10:43:03.276559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:14.766 [2024-11-12 10:43:03.315332] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.333 10:43:03 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.333 10:43:03 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:15.333 10:43:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:15.333 10:43:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.333 10:43:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:15.592 10:43:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:15.592 10:43:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:15.592 10:43:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:15.592 10:43:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.592 10:43:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.592 10:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.592 10:43:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.851 10:43:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:15.851 10:43:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:15.851 10:43:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.851 10:43:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:15.851 10:43:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.851 10:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.851 10:43:04 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:16.109 10:43:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:16.109 10:43:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:16.109 10:43:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:16.109 10:43:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:16.369 10:43:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:16.369 10:43:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:16.369 10:43:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sn6Lmcsmtz /tmp/tmp.32GJkyzGSe 00:21:16.369 10:43:04 keyring_file -- keyring/file.sh@20 -- # killprocess 84996 00:21:16.369 10:43:04 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84996 ']' 00:21:16.369 10:43:04 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84996 00:21:16.369 10:43:04 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:16.369 10:43:04 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84996 00:21:16.369 killing process with pid 84996 00:21:16.369 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.369 00:21:16.369 Latency(us) 00:21:16.369 [2024-11-12T10:43:05.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.369 [2024-11-12T10:43:05.127Z] =================================================================================================================== 00:21:16.369 [2024-11-12T10:43:05.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84996' 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@971 -- # kill 84996 00:21:16.369 10:43:05 keyring_file -- common/autotest_common.sh@976 -- # wait 84996 00:21:16.628 10:43:05 keyring_file -- keyring/file.sh@21 -- # killprocess 84748 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84748 ']' 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84748 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84748 00:21:16.628 killing process with pid 84748 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84748' 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@971 -- # kill 84748 00:21:16.628 10:43:05 keyring_file -- common/autotest_common.sh@976 -- # wait 84748 00:21:16.888 00:21:16.888 real 0m14.046s 00:21:16.888 user 0m36.374s 00:21:16.888 sys 0m2.572s 00:21:16.888 ************************************ 00:21:16.888 END TEST keyring_file 00:21:16.888 ************************************ 
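(Editor's note, not part of the captured output: the keyring_file teardown traced above verifies key reference counts and the attached controller name over the bperf RPC socket before killing both daemons. A minimal sketch of that check, reusing only the rpc.py calls and jq filters already visible in the trace, is given here for reference; the socket path and the "key0"/"nvme0" names are assumptions carried over from this particular run, not tool defaults.)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
# Total number of registered keys (the test above expects 2).
"$RPC" -s "$SOCK" keyring_get_keys | jq length
# Reference count of one key, as get_refcnt in keyring/common.sh does.
"$RPC" -s "$SOCK" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
# Controller name reported by bdevperf (the test above expects "nvme0").
"$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'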
00:21:16.888 10:43:05 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.888 10:43:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:16.888 10:43:05 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:21:16.888 10:43:05 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:16.888 10:43:05 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.888 10:43:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.888 10:43:05 -- common/autotest_common.sh@10 -- # set +x 00:21:16.888 ************************************ 00:21:16.888 START TEST keyring_linux 00:21:16.888 ************************************ 00:21:16.888 10:43:05 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:16.888 Joined session keyring: 793096277 00:21:16.888 * Looking for test storage... 00:21:16.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:16.888 10:43:05 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.888 10:43:05 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.888 10:43:05 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.888 10:43:05 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:16.888 10:43:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:16.889 10:43:05 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.889 10:43:05 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.889 --rc genhtml_branch_coverage=1 00:21:16.889 --rc genhtml_function_coverage=1 00:21:16.889 --rc genhtml_legend=1 00:21:16.889 --rc geninfo_all_blocks=1 00:21:16.889 --rc geninfo_unexecuted_blocks=1 00:21:16.889 00:21:16.889 ' 00:21:16.889 10:43:05 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.889 --rc genhtml_branch_coverage=1 00:21:16.889 --rc genhtml_function_coverage=1 00:21:16.889 --rc genhtml_legend=1 00:21:16.889 --rc geninfo_all_blocks=1 00:21:16.889 --rc geninfo_unexecuted_blocks=1 00:21:16.889 00:21:16.889 ' 00:21:16.889 10:43:05 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.889 --rc genhtml_branch_coverage=1 00:21:16.889 --rc genhtml_function_coverage=1 00:21:16.889 --rc genhtml_legend=1 00:21:16.889 --rc geninfo_all_blocks=1 00:21:16.889 --rc geninfo_unexecuted_blocks=1 00:21:16.889 00:21:16.889 ' 00:21:16.889 10:43:05 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.889 --rc genhtml_branch_coverage=1 00:21:16.889 --rc genhtml_function_coverage=1 00:21:16.889 --rc genhtml_legend=1 00:21:16.889 --rc geninfo_all_blocks=1 00:21:16.889 --rc geninfo_unexecuted_blocks=1 00:21:16.889 00:21:16.889 ' 00:21:16.889 10:43:05 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:16.889 10:43:05 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.889 10:43:05 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:96df7a2d-651c-49c0-b1c8-dd965eb48096 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=96df7a2d-651c-49c0-b1c8-dd965eb48096 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.889 10:43:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.889 10:43:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.889 10:43:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.889 10:43:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.889 10:43:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:16.889 10:43:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.889 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.889 10:43:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:17.149 /tmp/:spdk-test:key0 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:17.149 10:43:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:17.149 10:43:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:17.149 /tmp/:spdk-test:key1 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85119 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:17.149 10:43:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85119 00:21:17.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85119 ']' 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.149 10:43:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:17.149 [2024-11-12 10:43:05.822236] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:21:17.149 [2024-11-12 10:43:05.822528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85119 ] 00:21:17.408 [2024-11-12 10:43:05.967497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.408 [2024-11-12 10:43:05.996323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.408 [2024-11-12 10:43:06.034756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:17.408 10:43:06 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:17.408 10:43:06 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:17.408 10:43:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:17.408 10:43:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.408 10:43:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:17.408 [2024-11-12 10:43:06.163464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.667 null0 00:21:17.667 [2024-11-12 10:43:06.195411] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.667 [2024-11-12 10:43:06.195622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.667 10:43:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:17.667 298588648 00:21:17.667 10:43:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:17.667 875967290 00:21:17.667 10:43:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85128 00:21:17.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.667 10:43:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:17.667 10:43:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85128 /var/tmp/bperf.sock 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85128 ']' 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.667 10:43:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:17.667 [2024-11-12 10:43:06.269420] Starting SPDK v25.01-pre git sha1 eba7e4aea / DPDK 24.03.0 initialization... 
00:21:17.667 [2024-11-12 10:43:06.269665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85128 ] 00:21:17.667 [2024-11-12 10:43:06.407971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.926 [2024-11-12 10:43:06.438616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.926 10:43:06 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:17.926 10:43:06 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:17.926 10:43:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:17.926 10:43:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:18.185 10:43:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:18.185 10:43:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:18.444 [2024-11-12 10:43:06.967237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.444 10:43:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:18.444 10:43:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:18.703 [2024-11-12 10:43:07.208920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.703 nvme0n1 00:21:18.703 10:43:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:18.703 10:43:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:18.703 10:43:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:18.703 10:43:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:18.703 10:43:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:18.703 10:43:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.962 10:43:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:18.962 10:43:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:18.962 10:43:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:18.962 10:43:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:18.962 10:43:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:18.962 10:43:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:18.962 10:43:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@25 -- # sn=298588648 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 298588648 == \2\9\8\5\8\8\6\4\8 ]] 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 298588648 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:19.221 10:43:07 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.221 Running I/O for 1 seconds... 00:21:20.598 14878.00 IOPS, 58.12 MiB/s 00:21:20.598 Latency(us) 00:21:20.598 [2024-11-12T10:43:09.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:20.598 nvme0n1 : 1.01 14892.86 58.18 0.00 0.00 8558.42 6345.08 17277.67 00:21:20.598 [2024-11-12T10:43:09.356Z] =================================================================================================================== 00:21:20.598 [2024-11-12T10:43:09.356Z] Total : 14892.86 58.18 0.00 0.00 8558.42 6345.08 17277.67 00:21:20.598 { 00:21:20.598 "results": [ 00:21:20.598 { 00:21:20.598 "job": "nvme0n1", 00:21:20.598 "core_mask": "0x2", 00:21:20.598 "workload": "randread", 00:21:20.598 "status": "finished", 00:21:20.598 "queue_depth": 128, 00:21:20.598 "io_size": 4096, 00:21:20.598 "runtime": 1.007664, 00:21:20.598 "iops": 14892.861112434304, 00:21:20.598 "mibps": 58.1752387204465, 00:21:20.598 "io_failed": 0, 00:21:20.598 "io_timeout": 0, 00:21:20.598 "avg_latency_us": 8558.416537252311, 00:21:20.598 "min_latency_us": 6345.076363636364, 00:21:20.598 "max_latency_us": 17277.672727272726 00:21:20.598 } 00:21:20.598 ], 00:21:20.598 "core_count": 1 00:21:20.598 } 00:21:20.598 10:43:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:20.598 10:43:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:20.598 10:43:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:20.598 10:43:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:20.598 10:43:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:20.598 10:43:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:20.598 10:43:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:20.598 10:43:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.858 10:43:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:20.858 10:43:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:20.858 10:43:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:20.858 10:43:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:20.858 
10:43:09 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.858 10:43:09 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:20.858 10:43:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:21.117 [2024-11-12 10:43:09.758676] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.117 [2024-11-12 10:43:09.759630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aa5d0 (107): Transport endpoint is not connected 00:21:21.117 [2024-11-12 10:43:09.760605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24aa5d0 (9): Bad file descriptor 00:21:21.117 [2024-11-12 10:43:09.761600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:21.117 [2024-11-12 10:43:09.761623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:21.117 [2024-11-12 10:43:09.761649] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:21.117 [2024-11-12 10:43:09.761659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:21.117 request: 00:21:21.117 { 00:21:21.117 "name": "nvme0", 00:21:21.117 "trtype": "tcp", 00:21:21.117 "traddr": "127.0.0.1", 00:21:21.117 "adrfam": "ipv4", 00:21:21.117 "trsvcid": "4420", 00:21:21.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:21.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:21.117 "prchk_reftag": false, 00:21:21.117 "prchk_guard": false, 00:21:21.117 "hdgst": false, 00:21:21.117 "ddgst": false, 00:21:21.117 "psk": ":spdk-test:key1", 00:21:21.117 "allow_unrecognized_csi": false, 00:21:21.117 "method": "bdev_nvme_attach_controller", 00:21:21.117 "req_id": 1 00:21:21.117 } 00:21:21.117 Got JSON-RPC error response 00:21:21.117 response: 00:21:21.117 { 00:21:21.117 "code": -5, 00:21:21.117 "message": "Input/output error" 00:21:21.117 } 00:21:21.117 10:43:09 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:21.117 10:43:09 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.117 10:43:09 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@33 -- # sn=298588648 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 298588648 00:21:21.118 1 links removed 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@33 -- # sn=875967290 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 875967290 00:21:21.118 1 links removed 00:21:21.118 10:43:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85128 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85128 ']' 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85128 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85128 00:21:21.118 killing process with pid 85128 00:21:21.118 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.118 00:21:21.118 Latency(us) 00:21:21.118 [2024-11-12T10:43:09.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.118 [2024-11-12T10:43:09.876Z] =================================================================================================================== 00:21:21.118 [2024-11-12T10:43:09.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.118 10:43:09 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85128' 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@971 -- # kill 85128 00:21:21.118 10:43:09 keyring_linux -- common/autotest_common.sh@976 -- # wait 85128 00:21:21.377 10:43:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85119 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85119 ']' 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85119 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85119 00:21:21.377 killing process with pid 85119 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85119' 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@971 -- # kill 85119 00:21:21.377 10:43:09 keyring_linux -- common/autotest_common.sh@976 -- # wait 85119 00:21:21.637 00:21:21.637 real 0m4.747s 00:21:21.637 user 0m9.669s 00:21:21.637 sys 0m1.290s 00:21:21.637 10:43:10 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.637 ************************************ 00:21:21.637 END TEST keyring_linux 00:21:21.637 ************************************ 00:21:21.637 10:43:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 10:43:10 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:21.637 10:43:10 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:21.637 10:43:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:21.637 10:43:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:21.637 10:43:10 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:21.637 10:43:10 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:21:21.637 10:43:10 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:21.637 10:43:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.637 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 10:43:10 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:21.637 10:43:10 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:21.637 10:43:10 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:21.637 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:21:23.542 INFO: APP EXITING 00:21:23.542 INFO: killing all VMs 
00:21:23.542 INFO: killing vhost app 00:21:23.542 INFO: EXIT DONE 00:21:24.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.110 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:24.110 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:25.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.064 Cleaning 00:21:25.064 Removing: /var/run/dpdk/spdk0/config 00:21:25.064 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:25.064 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:25.064 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:25.064 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:25.064 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:25.064 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:25.064 Removing: /var/run/dpdk/spdk1/config 00:21:25.064 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:25.064 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:25.064 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:25.064 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:25.064 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:25.064 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:25.064 Removing: /var/run/dpdk/spdk2/config 00:21:25.064 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:25.064 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:25.064 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:25.064 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:25.064 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:25.064 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:25.064 Removing: /var/run/dpdk/spdk3/config 00:21:25.064 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:25.064 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:25.065 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:25.065 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:25.065 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:25.065 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:25.065 Removing: /var/run/dpdk/spdk4/config 00:21:25.065 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:25.065 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:25.065 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:25.065 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:25.065 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:25.065 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:25.065 Removing: /dev/shm/nvmf_trace.0 00:21:25.065 Removing: /dev/shm/spdk_tgt_trace.pid56717 00:21:25.065 Removing: /var/run/dpdk/spdk0 00:21:25.065 Removing: /var/run/dpdk/spdk1 00:21:25.065 Removing: /var/run/dpdk/spdk2 00:21:25.065 Removing: /var/run/dpdk/spdk3 00:21:25.065 Removing: /var/run/dpdk/spdk4 00:21:25.065 Removing: /var/run/dpdk/spdk_pid56569 00:21:25.065 Removing: /var/run/dpdk/spdk_pid56717 00:21:25.065 Removing: /var/run/dpdk/spdk_pid56910 00:21:25.065 Removing: /var/run/dpdk/spdk_pid56996 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57011 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57120 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57131 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57265 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57460 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57609 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57687 00:21:25.065 
Removing: /var/run/dpdk/spdk_pid57758 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57844 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57916 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57954 00:21:25.065 Removing: /var/run/dpdk/spdk_pid57990 00:21:25.065 Removing: /var/run/dpdk/spdk_pid58054 00:21:25.065 Removing: /var/run/dpdk/spdk_pid58135 00:21:25.065 Removing: /var/run/dpdk/spdk_pid58568 00:21:25.065 Removing: /var/run/dpdk/spdk_pid58620 00:21:25.065 Removing: /var/run/dpdk/spdk_pid58658 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58667 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58715 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58729 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58785 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58799 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58839 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58850 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58894 00:21:25.066 Removing: /var/run/dpdk/spdk_pid58900 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59031 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59066 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59143 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59475 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59487 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59518 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59532 00:21:25.066 Removing: /var/run/dpdk/spdk_pid59547 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59566 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59580 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59595 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59614 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59628 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59643 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59662 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59676 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59691 00:21:25.334 Removing: /var/run/dpdk/spdk_pid59705 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59724 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59734 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59753 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59766 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59782 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59812 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59826 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59855 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59922 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59956 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59960 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59988 00:21:25.335 Removing: /var/run/dpdk/spdk_pid59998 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60000 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60048 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60056 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60084 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60094 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60098 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60113 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60117 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60122 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60136 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60140 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60174 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60195 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60199 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60233 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60237 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60250 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60285 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60291 00:21:25.335 Removing: 
/var/run/dpdk/spdk_pid60323 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60325 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60338 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60340 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60342 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60355 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60357 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60365 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60441 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60483 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60593 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60627 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60666 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60686 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60703 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60717 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60749 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60764 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60843 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60862 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60901 00:21:25.335 Removing: /var/run/dpdk/spdk_pid60963 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61024 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61048 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61141 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61184 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61216 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61443 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61540 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61569 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61593 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61632 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61660 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61699 00:21:25.335 Removing: /var/run/dpdk/spdk_pid61725 00:21:25.335 Removing: /var/run/dpdk/spdk_pid62108 00:21:25.335 Removing: /var/run/dpdk/spdk_pid62157 00:21:25.335 Removing: /var/run/dpdk/spdk_pid62496 00:21:25.335 Removing: /var/run/dpdk/spdk_pid62952 00:21:25.335 Removing: /var/run/dpdk/spdk_pid63215 00:21:25.335 Removing: /var/run/dpdk/spdk_pid64051 00:21:25.335 Removing: /var/run/dpdk/spdk_pid64980 00:21:25.335 Removing: /var/run/dpdk/spdk_pid65097 00:21:25.335 Removing: /var/run/dpdk/spdk_pid65159 00:21:25.594 Removing: /var/run/dpdk/spdk_pid66585 00:21:25.594 Removing: /var/run/dpdk/spdk_pid66892 00:21:25.594 Removing: /var/run/dpdk/spdk_pid70590 00:21:25.594 Removing: /var/run/dpdk/spdk_pid70956 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71067 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71194 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71215 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71235 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71265 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71357 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71487 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71623 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71699 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71880 00:21:25.594 Removing: /var/run/dpdk/spdk_pid71948 00:21:25.594 Removing: /var/run/dpdk/spdk_pid72029 00:21:25.594 Removing: /var/run/dpdk/spdk_pid72386 00:21:25.594 Removing: /var/run/dpdk/spdk_pid72797 00:21:25.594 Removing: /var/run/dpdk/spdk_pid72798 00:21:25.594 Removing: /var/run/dpdk/spdk_pid72799 00:21:25.594 Removing: /var/run/dpdk/spdk_pid73062 00:21:25.594 Removing: /var/run/dpdk/spdk_pid73327 00:21:25.594 Removing: /var/run/dpdk/spdk_pid73707 00:21:25.594 Removing: /var/run/dpdk/spdk_pid73709 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74033 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74047 
00:21:25.594 Removing: /var/run/dpdk/spdk_pid74071 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74097 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74104 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74450 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74493 00:21:25.594 Removing: /var/run/dpdk/spdk_pid74817 00:21:25.594 Removing: /var/run/dpdk/spdk_pid75019 00:21:25.594 Removing: /var/run/dpdk/spdk_pid75433 00:21:25.595 Removing: /var/run/dpdk/spdk_pid75991 00:21:25.595 Removing: /var/run/dpdk/spdk_pid76864 00:21:25.595 Removing: /var/run/dpdk/spdk_pid77490 00:21:25.595 Removing: /var/run/dpdk/spdk_pid77492 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79522 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79578 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79631 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79679 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79800 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79847 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79894 00:21:25.595 Removing: /var/run/dpdk/spdk_pid79946 00:21:25.595 Removing: /var/run/dpdk/spdk_pid80307 00:21:25.595 Removing: /var/run/dpdk/spdk_pid81509 00:21:25.595 Removing: /var/run/dpdk/spdk_pid81654 00:21:25.595 Removing: /var/run/dpdk/spdk_pid81898 00:21:25.595 Removing: /var/run/dpdk/spdk_pid82501 00:21:25.595 Removing: /var/run/dpdk/spdk_pid82661 00:21:25.595 Removing: /var/run/dpdk/spdk_pid82822 00:21:25.595 Removing: /var/run/dpdk/spdk_pid82915 00:21:25.595 Removing: /var/run/dpdk/spdk_pid83080 00:21:25.595 Removing: /var/run/dpdk/spdk_pid83189 00:21:25.595 Removing: /var/run/dpdk/spdk_pid83888 00:21:25.595 Removing: /var/run/dpdk/spdk_pid83923 00:21:25.595 Removing: /var/run/dpdk/spdk_pid83958 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84209 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84244 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84278 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84748 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84758 00:21:25.595 Removing: /var/run/dpdk/spdk_pid84996 00:21:25.595 Removing: /var/run/dpdk/spdk_pid85119 00:21:25.595 Removing: /var/run/dpdk/spdk_pid85128 00:21:25.595 Clean 00:21:25.854 10:43:14 -- common/autotest_common.sh@1451 -- # return 0 00:21:25.854 10:43:14 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:25.854 10:43:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.854 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:21:25.854 10:43:14 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:25.854 10:43:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.854 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:21:25.854 10:43:14 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:25.854 10:43:14 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:25.854 10:43:14 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:25.854 10:43:14 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:25.854 10:43:14 -- spdk/autotest.sh@394 -- # hostname 00:21:25.854 10:43:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:26.112 geninfo: WARNING: invalid characters removed from testname! 
00:21:52.662 10:43:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:52.662 10:43:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.567 10:43:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:56.470 10:43:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.004 10:43:47 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:01.537 10:43:49 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.071 10:43:52 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:04.071 10:43:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:04.071 10:43:52 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:04.071 10:43:52 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:04.071 10:43:52 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:04.071 10:43:52 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:04.071 + [[ -n 5263 ]] 00:22:04.071 + sudo kill 5263 00:22:04.081 [Pipeline] } 00:22:04.096 [Pipeline] // timeout 00:22:04.102 [Pipeline] } 00:22:04.116 [Pipeline] // stage 00:22:04.121 [Pipeline] } 00:22:04.136 [Pipeline] // catchError 00:22:04.147 [Pipeline] stage 00:22:04.149 [Pipeline] { (Stop VM) 00:22:04.162 [Pipeline] sh 00:22:04.499 + vagrant halt 00:22:07.046 ==> default: Halting domain... 
00:22:13.628 [Pipeline] sh 00:22:13.908 + vagrant destroy -f 00:22:17.195 ==> default: Removing domain... 00:22:17.208 [Pipeline] sh 00:22:17.491 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:17.500 [Pipeline] } 00:22:17.515 [Pipeline] // stage 00:22:17.520 [Pipeline] } 00:22:17.534 [Pipeline] // dir 00:22:17.540 [Pipeline] } 00:22:17.556 [Pipeline] // wrap 00:22:17.562 [Pipeline] } 00:22:17.575 [Pipeline] // catchError 00:22:17.584 [Pipeline] stage 00:22:17.586 [Pipeline] { (Epilogue) 00:22:17.600 [Pipeline] sh 00:22:17.881 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:23.166 [Pipeline] catchError 00:22:23.168 [Pipeline] { 00:22:23.181 [Pipeline] sh 00:22:23.464 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:23.723 Artifacts sizes are good 00:22:23.732 [Pipeline] } 00:22:23.747 [Pipeline] // catchError 00:22:23.760 [Pipeline] archiveArtifacts 00:22:23.767 Archiving artifacts 00:22:23.891 [Pipeline] cleanWs 00:22:23.903 [WS-CLEANUP] Deleting project workspace... 00:22:23.903 [WS-CLEANUP] Deferred wipeout is used... 00:22:23.910 [WS-CLEANUP] done 00:22:23.912 [Pipeline] } 00:22:23.927 [Pipeline] // stage 00:22:23.932 [Pipeline] } 00:22:23.946 [Pipeline] // node 00:22:23.951 [Pipeline] End of Pipeline 00:22:24.009 Finished: SUCCESS
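(Editor's note, not part of the captured output: the keyring_linux pass logged above reduces to a handful of commands that can be replayed by hand against a target listening on 127.0.0.1:4420. The sketch below reuses only invocations already present in the trace (keyctl, rpc.py, bdevperf.py); the PSK value is the test's throwaway sample key, and the RPC socket path and NQNs are assumptions taken from this run rather than defaults of the tools.)
# Register the interchange-format PSK in the session keyring, as keyring/linux.sh@66 does.
keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
# Attach a TLS-protected NVMe/TCP controller through bdevperf's RPC socket,
# naming the kernel key instead of a key file (keyring/linux.sh@75).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# Drive I/O, then detach and drop the key, mirroring the cleanup traced above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"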